# Multi-level Models in Keras Playground
Linear mixed effects models, also known as hierarchical linear models or multi-level models, are powerful linear modeling tools that can handle both regression and classification tasks for many structured data sets. This notebook describes what a multi-level model is and how to implement one using the neural network library Keras. Model outputs are compared to the multi-level models available in the statsmodels package.
A comparison between:
* [StatsModels](https://github.com/statsmodels/statsmodels)
* [Keras](https://github.com/fchollet/keras) with TensorFlow backend

For brevity, this tutorial will ignore cross-validation and hold-out data as tools for model assessment.
## A very brief introduction to multi-level models
Multi-level models account for different levels within a data set. Levels are groupings of data that apply across several observations. For example, a classic data set (simulated below) is math achievement versus socio-economic status (SES) for students who attend catholic versus public schools. The first level is the student's SES, whereas the second level is the school they attend. Multi-level models can account for fixed effects (i.e., the variance does not change within groups) and random effects (i.e., the variance is distributed across groups). Multi-level models are linear models. For the catholic school data set, a naive single-level equation for predicting student math achievement (shown only for contrast, since it ignores the grouping structure) would be:
$$ math\_achievement = \alpha_{01} + \beta_{01} * SES + \beta_{02} * catholic\_school $$
The multi-level model takes the general form:
$$ Y_{ij} = \beta_{0j} + \beta_{1j}X_{ij} + r_{ij} $$
$$ \beta_{0j} = \gamma_{00} + \gamma_{01}W_j + u_{0j} $$
$$ \beta_{1j} = \gamma_{10} + \gamma_{11}W_j + u_{1j} $$
And the more specific form:
$$ Y_{math\ achievement, i} = \beta_{0,school} + \beta_{1,school}X_{i, SES} + \beta_{2,school}X_{i,school} + r_{i,j} $$
$$ \beta_{0, school} = \gamma_{00} + \gamma_{01}W_{school} + u_{0, school} $$
$$ \beta_{1, school} = \gamma_{10} + \gamma_{11}W_{school} + u_{1, school} $$
Where
| variable | description |
|----------------------------------------|--------------------------------------------------------------------------------------------------|
| $i=1,2,3...$ | the student indicator, i.e., the student ID. |
| $j=catholic,public$ | the school group indicator |
| $\beta_{0, school}$, $\beta_{1,school}$ | Level-1 coefficients; in this case the coefficients on SES and on the categorical school membership variable. |
| $\gamma_{00}...\gamma_{11}$ | Level-2 coefficients, also known as fixed effects. |
| $X_{ij}$ | Level-1 variable. SES, etc. |
| $W_{j}$ | Level-2 predictor. School membership, etc. |
| $r_{ij}$ | Level-1 random effect. |
| $u_{0,j},u_{1,j}...$ | Level-2 random effects. |
In sum, these equations allow different intercepts and slopes depending on whether a student attended a catholic school or a public school.
```
import numpy as np
import statsmodels.formula.api as smf
from patsy import dmatrices
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.metrics import r2_score
%load_ext watermark
```
`watermark` prints the versions of the libraries, Python, and the computer hardware, in case this matters for your use of this notebook. It reports that `keras` is not installed; this is because `keras` ships with the `tensorflow` installation and thus does not need to be installed separately.
```
%watermark -v -m -p numpy,pandas,statsmodels,tensorflow,keras,matplotlib
```
# Prepare data and quick visualization
For the first comparison, we use the California housing data set and create a categorical variable called `HouseAgeGroup`. This is because the data analysis shows there are probably several groups of house ages corresponding to periods of housing expansion. `sklearn` has many built-in data sets that include data descriptions. The California housing data set has no strong correlations between features and makes for a good first attempt at modeling data.
```
from sklearn.datasets import fetch_california_housing as get_data
data = get_data()
df = pd.DataFrame(data['data'], columns=data['feature_names'])
# popGroup is high=1; low=0
df['popGroup'] = df.Population.apply(lambda x: 1 if x >= 1000 else 0).astype('category')
def house_age_group(age):
if age >= 52:
return 3
elif age >= 30 and age < 52:
return 2
elif age < 30:
return 1
df['HouseAgeGroup'] = df.HouseAge.apply(lambda x: house_age_group(x))
df.head()
print(data['DESCR'])
fig, ax = plt.subplots(1, 5, figsize=(15, 5), sharey=True)
for n, c in enumerate(df.columns[1:6]):
df.plot.scatter(ax=ax[n], x=c, y='MedInc', alpha=0.05)
df.HouseAge.hist(bins=np.arange(1, 53, 1))
fig, ax = plt.subplots(figsize=(9, 7))
z = df.corr('spearman')
cbar = ax.pcolormesh(z, cmap='seismic', vmin=-0.7, vmax=0.7)
fig.colorbar(cbar, label='Spearman Correlation')
ax.set_xticks(np.arange(0, 8, 1)+0.5)
ax.set_yticks(np.arange(0, 8, 1)+0.5)
ax.set_xticklabels(z.columns)
ax.set_yticklabels(z.index)
for n,mc in enumerate(z.values):
for i,m in enumerate(mc):
ax.text(n+0.3, i+0.35, str(round(m,2)), color='black', fontsize=12)
```
# Using StatsModels to perform a linear mixed model of median income
`statsmodels` can use `R`-like formulas to define fixed effects equations. However, it uses the `groups` argument instead of the `|` syntax within the formula to declare random effects. It is common practice for a variable with random effects to also have fixed effects, because [random effects without fixed effects imply that the variable has no average effect.](https://stats.stackexchange.com/questions/173159/can-a-variable-be-both-random-and-fixed-effect-at-the-same-time-in-a-mixed-effec)
```
# https://www.statsmodels.org/stable/mixed_linear.html
formula = "MedInc ~ AveRooms + AveBedrms + AveRooms*AveBedrms + C(HouseAgeGroup)"
md = smf.mixedlm(formula, df, groups=df['HouseAgeGroup'])
mdf = md.fit()
print(mdf.summary())
fe_params = pd.DataFrame(mdf.fe_params,columns=['LMM'])
random_effects = pd.DataFrame(mdf.random_effects)
random_effects = random_effects.transpose()
random_effects = random_effects.rename(index=str, columns={'groups': 'LMM'})
random_effects
```
In this case it seems that the grouping (house age group) is not so important. However, the other features in the model seem to explain at least half the variance in the median incomes of California home owners.
```
ypred = mdf.predict(df)
fig, ax = plt.subplots()
ax.scatter(df['MedInc'], ypred, alpha=0.05)
ax.set_ylim(0, 10)
ax.set_ylabel('Predicted', fontsize=15)
ax.set_xlabel('Actual', fontsize=15)
ax.plot([0, 10], [0, 10], color='red')
print('R2 score:', r2_score(df['MedInc'], ypred))
```
# creating a design matrix from a statsmodels formula
`statsmodels` can accept `pandas` dataframes directly as input with the defined groups. `keras` can not. Thus we need to create a [design matrix](https://en.wikipedia.org/wiki/Design_matrix) directly for training the `keras` model.
```
Y, X = dmatrices(formula, data=df, return_type='matrix')
Terms = X.design_info.column_names
_, Z = dmatrices('MedInc ~ -1 + C(HouseAgeGroup)', data=df, return_type='matrix')
X = np.asarray(X) # fixed effect
Z = np.asarray(Z) # mixed effect
Y = np.asarray(Y).flatten()
nfixed = np.shape(X)
nrandm = np.shape(Z)
```
# Using Keras
`keras` is a library for constructing neural networks. Neural networks, at their most basic level, are linear combinations of variables, that is, linear models. They can include much more sophistication, but at their core they are no different from any other model based on linear combinations of variables. Thus, `keras` provides a modular and explicit way to construct multi-level models.
```
import tensorflow.keras as keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Add, Dense
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import TensorBoard
K.clear_session()
nb_epoch = 500
fixedpred = np.argmax(X,axis=1)
randmpred = np.argmax(Z,axis=1)
Xinput = Input(batch_shape=(None, nfixed[1]-1), name='level_1_variables')
fixed_keras = Dense(1, input_dim=nfixed[1]-1, name = 'fixedEffect')(Xinput)
Zinput = Input(batch_shape=(None, nrandm[1]), name='level_2_variables')
randm_keras = Dense(1, input_dim=nrandm[1], use_bias=False, name='randomEffect')(Zinput)
merged = keras.layers.add([fixed_keras, randm_keras])
model = Model([Xinput, Zinput], merged)
model.compile(loss='mean_squared_error', optimizer='adam')
# train the model
model.fit([X[:,1:], Z], Y.flatten(),
epochs=nb_epoch,
batch_size=100,
verbose=0,
shuffle=True,
)
Ypredict = model.predict([X[:,1:], Z])
betakeras = np.hstack((model.get_weights()[1], model.get_weights()[0].flatten()))
bkeras = model.get_weights()[2].flatten()
from tensorflow.keras.utils import plot_model
pm = plot_model(model,
to_file='model.png',
show_shapes=True,
show_layer_names=True,
rankdir='TB')
from IPython.display import display, Image
display(Image(filename='model.png'))
fe_params['Keras'] = pd.Series(betakeras, index=fe_params.index)
random_effects['Keras'] = pd.Series(bkeras, index=random_effects.index)
fe_params
fig, ax = plt.subplots(figsize=(5, 10))
yticks = np.arange(fe_params.shape[0])
ax.plot(fe_params, yticks)
ax.set_yticks(yticks)
ax.set_yticklabels(labels=fe_params.index, rotation=0)
ax.legend(['LMM', 'Keras'], fontsize=15)
ax.set_xlabel('Coefficient value', fontsize=15)
random_effects
fig, ax = plt.subplots(figsize=(10, 5))
random_effects.reset_index().plot(ax=ax)
# ax.set_xticks(np.arange(0, 20, 1))
ax.set_title('random effects', fontsize=15)
fig, ax = plt.subplots(figsize=(9, 7))
ax.scatter(ypred, Ypredict, alpha=0.5, label='model comparison')
ax.plot([-100, 100], [-100, 100], label='perfect match', color='red')
ax.set_ylabel('Keras', fontsize=15)
ax.set_xlabel('statsmodels', fontsize=15)
ax.set_ylim(-20, 50)
ax.set_xlim(-25, 80)
ax.legend(fontsize=15, title='Median Income')
fig, ax = plt.subplots(figsize=(12, 5))
ax.plot(ypred - Ypredict.flatten(), marker='o', linewidth=0)
ax.set_ylabel('statsmodels(y) - keras(y)', fontsize=15)
print('R2 score of model comparison:', r2_score(ypred, Ypredict))
```
# Catholic School Simulation
The catholic school data set is a classic hierarchical data set used in education research to justify multi-level models (Bryk and Raudenbush, 2002). The data set is typically a 2D comparison of math achievement (typically described as a math test score) versus socio-economic status (SES) of students who attend catholic or public schools. The catholic school students perform better than the public school students thus justifying the need for a linear model to have two intercepts and slopes depending on the group they belong to.
In this case, I have simulated the catholic school data set.
```
num_samples = 1000
# The desired mean values of the sample.
mu = np.array([5.0, 10.0])
# The desired covariance matrix.
r = np.array([
[ 3.40, 5.75],
[ 5.75, 5.50]
])
# Generate the random samples.
y = np.random.multivariate_normal(mu, r, size=num_samples)
catholic_data = pd.DataFrame({'SES':y[:,0], 'math_score':y[:,1]})
catholic_data['catholic_student'] = [1 if n>0.5 else 0 for n in np.random.random(num_samples)]
catholic_data['math_score'] = catholic_data.apply(lambda x: x['math_score']*3 if x['catholic_student']==1 else x['math_score'], axis=1)
catholic_data['math_score'] = catholic_data['math_score']/catholic_data['math_score'].max()
catholic_data['SES'] = catholic_data['SES'].apply(lambda x: (x - catholic_data['SES'].mean())/catholic_data['SES'].std())
catholic_data['colors'] = catholic_data['catholic_student'].apply(lambda x: 'green' if x==1 else 'purple')
catholic_data.describe()
fig, ax = plt.subplots(figsize=(9, 7))
catholic_data.plot.scatter(x='SES', y='math_score', color=catholic_data['colors'], alpha=0.5, ax=ax, s=55)
ax.set_ylabel('Math Achievement', fontsize=15)
ax.set_xlabel('Socio-Economic Status', fontsize=15)
```
# statsmodels catholic data
```
# https://www.statsmodels.org/stable/mixed_linear.html
# random effects should be fixed effects unless you want to imply the average effect of the random effect is 0
# https://stats.stackexchange.com/questions/173159/can-a-variable-be-both-random-and-fixed-effect-at-the-same-time-in-a-mixed-effec
formula = 'math_score ~ SES + SES * C(catholic_student)'
md = smf.mixedlm(formula, catholic_data, groups=catholic_data['catholic_student'])
mdf = md.fit()
print(mdf.summary())
fe_params = pd.DataFrame(mdf.fe_params,columns=['LMM'])
random_effects = pd.DataFrame(mdf.random_effects)
random_effects = random_effects.transpose()
random_effects = random_effects.rename(index=str, columns={'groups': 'LMM'})
random_effects
mdf.random_effects
ypred = mdf.predict(catholic_data)
fig, ax = plt.subplots()
ax.scatter(catholic_data['math_score'], ypred, alpha=0.5)
ax.set_ylabel('Predicted', fontsize=15)
ax.set_xlabel('Actual', fontsize=15)
ax.plot([0, 1], [0, 1], color='red')
```
# keras catholic data
```
Y, X = dmatrices(formula, data=catholic_data, return_type='matrix')
Terms = X.design_info.column_names
_, Z = dmatrices('math_score ~ -1 + C(catholic_student)', data=catholic_data, return_type='matrix')
X = np.asarray(X) # fixed effect
Z = np.asarray(Z) # mixed effect
Y = np.asarray(Y).flatten()
nfixed = np.shape(X)
nrandm = np.shape(Z)
K.clear_session()
nb_epoch = 500
fixedpred = np.argmax(X,axis=1)
randmpred = np.argmax(Z,axis=1)
Xinput = Input(batch_shape=(None, nfixed[1]-1), name='individualEffects')
fixed_keras = Dense(1, input_dim=nfixed[1]-1, name='fixedEffect')(Xinput)
Zinput = Input(batch_shape=(None, nrandm[1]), name='schoolEffects')
randm_keras = Dense(1, input_dim=nrandm[1], use_bias=False, name='randomEffect')(Zinput)
merged = keras.layers.add([fixed_keras, randm_keras])
model = Model([Xinput, Zinput], merged)
model.compile(loss='mean_squared_error', optimizer='adam')
# train the model
model.fit([X[:,1:], Z], Y.flatten(),
epochs=nb_epoch,
batch_size=100,
verbose=0,
shuffle=True,
)
Ypredict = model.predict([X[:,1:], Z])
betakeras = np.hstack((model.get_weights()[1], model.get_weights()[0].flatten()))
bkeras = model.get_weights()[2].flatten()
from tensorflow.keras.utils import plot_model
pm = plot_model(model,
to_file='model.png',
show_shapes=True,
show_layer_names=True,
rankdir='TB')
from IPython.display import display, Image
display(Image(filename='model.png'))
fe_params['Keras'] = pd.Series(betakeras, index=fe_params.index)
random_effects['Keras'] = pd.Series(bkeras, index=random_effects.index)
fe_params
fig, ax = plt.subplots(figsize=(5, 10))
yticks = np.arange(fe_params.shape[0])
ax.plot(fe_params, yticks)
ax.set_yticks(yticks)
ax.set_yticklabels(labels=fe_params.index, rotation=0)
ax.legend(['LMM', 'Keras'], fontsize=15)
ax.set_xlabel('Coefficient value', fontsize=15)
random_effects
```
It is important to note that, although the fitted regression coefficients are not the same across the two approaches, the model predictions are highly similar (at least they pass the eyeball test).
# statsmodels compared to keras predicted output
As we can see below, the predicted math achievement value for each student is nearly identical between the `statsmodels` multi-level model and the `keras` model.
```
fig, (ax, ax2) = plt.subplots(1, 2, figsize=(13, 6))
ax.scatter(ypred, Ypredict, alpha=0.5, label='model comparison')
ax.plot([0, 1], [0, 1], color='red', linewidth=2)
ax.set_ylabel('Keras', fontsize=15)
ax.set_xlabel('statsmodels', fontsize=15)
ax.legend(fontsize=15, title='catholic school\nmath achievement')
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax2.scatter(Y, ypred, label='statsmodels', alpha=0.5, s=100)
ax2.scatter(Y, Ypredict, label='keras', marker='x', color='black', alpha=0.5)
ax2.plot([0, 1], [0, 1], color='red', linewidth=2, label='perfect prediction')
ax2.legend(fontsize=15, title='catholic school\nmath achievement')
ax2.set_ylabel('Predicted', fontsize=15)
ax2.set_xlabel('Actual', fontsize=15)
fig.tight_layout()
fig, ax = plt.subplots(figsize=(12, 5))
ax.plot(ypred - Ypredict.flatten(), marker='o', linewidth=0)
ax.set_ylabel('statsmodels(y) - keras(y)', fontsize=15)
print('R2 score of model comparison:', r2_score(ypred, Ypredict))
```
# Data Preparation
Clone GitHub repository to Colab storage.
```
!git clone https://github.com/megagonlabs/HappyDB.git
!ls
!ls HappyDB/happydb/data
```
# Utility functions
```
import numpy as np
from sklearn.base import clone
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, GridSearchCV, train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import confusion_matrix, f1_score
import warnings
warnings.filterwarnings('ignore')
def run_cv(X, y, clf, num_classes):
kf = KFold(n_splits=5, shuffle=True, random_state=1)  # shuffle=True is required when setting random_state
cm = np.zeros([num_classes,
num_classes],
dtype="int") # Initialize confusion matrix with 0
f1_list = []
for i, (train_index, test_index) in enumerate(kf.split(X)):
print("Fold {}".format(i + 1))
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
cur_clf = clone(clf)
cur_clf.fit(X_train, y_train)
y_pred = cur_clf.predict(X_test)
cm += confusion_matrix(y_test, y_pred)
f1_list.append(f1_score(y_test, y_pred, average="macro"))
f1_scores = np.array(f1_list)
return (f1_scores, cm)
```
## Loading CSV file as DataFrame
Use `.read_csv()` function to load a CSV file.
```
import pandas as pd
hm_df = pd.read_csv("HappyDB/happydb/data/cleaned_hm.csv")
hm_df.head()
# Filtering out samples that do not have ground truth labels
# or # of sentences > 3
filtered_hm_df = hm_df[(hm_df["num_sentence"] <= 3) &
(~ hm_df["ground_truth_category"].isnull())]
print("Original # of HM: {}".format(len(hm_df)))
print("Filtered # of HM: {}".format(len(filtered_hm_df)))
```
# Label vector & Feature matrix creation
Let's create the label vector and feature matrix from the DataFrame.
```
# Label Encoder
le = LabelEncoder()
y = le.fit_transform(filtered_hm_df["ground_truth_category"])
y
le.classes_
Xcount = CountVectorizer().fit_transform(filtered_hm_df["cleaned_hm"])
```
# Try other feature extraction methods
```
%%time
# Creates feature vectors
Xtfidf = TfidfVectorizer().fit_transform(filtered_hm_df["cleaned_hm"])
Xlda = LatentDirichletAllocation().fit_transform(
CountVectorizer().fit_transform(filtered_hm_df["cleaned_hm"]))
Xcount_lda = np.concatenate([Xcount.todense(), Xlda], axis=1)
f1_scores_count, _ = run_cv(Xcount, y, LogisticRegression(), len(le.classes_))
f1_scores_tfidf, _ = run_cv(Xtfidf, y, LogisticRegression(), len(le.classes_))
f1_scores_lda, _ = run_cv(Xlda, y, LogisticRegression(), len(le.classes_))
f1_scores_count_lda, _ = run_cv(Xcount_lda, y, LogisticRegression(), len(le.classes_))
eval_df = pd.DataFrame({"CountVec": f1_scores_count,
"TfidfVec": f1_scores_tfidf,
"LDA": f1_scores_lda,
"Count+LDA": f1_scores_count_lda})
eval_df
```
Try!
- Try different configurations of `CountVectorizer()`, `TfidfVectorizer()`, and `LatentDirichletAllocation()`.
- Replace `LogisticRegression()` with other algorithms.
- Replace `LogisticRegression()` with `GridSearchCV(LogisticRegression(), ...)` (see the sketch below).
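As a minimal sketch of the last suggestion, the wrapped estimator can be passed straight to `run_cv`; the parameter grid, scoring metric, and inner `cv` value below are illustrative assumptions, not part of the original notebook.
```
# Hypothetical example: nest a small grid search over C inside the existing run_cv loop
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}
grid_clf = GridSearchCV(LogisticRegression(), param_grid, scoring="f1_macro", cv=3)
f1_scores_grid, _ = run_cv(Xcount, y, grid_clf, len(le.classes_))
print(f1_scores_grid.mean())
```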
```
import spacy
nlp = spacy.load("en_core_web_sm")
# Sample code from spaCy
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
info_list = []
for token in doc:
info_list.append([token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
token.shape_, token.is_alpha, token.is_stop])
pd.DataFrame(
info_list, columns=["TEXT", "LEMMA", "POS", "TAG", "DEP", "SHAPE", "ALPHA", "STOP"])
```
# Feature Engineering
Use the following ideas as preprocessing
- Remove stop words
- Filter adjectives, nouns, and verbs
```
pos_set = ["ADJ", "PROPN", "NOUN", "VERB"]
proc_hm_list = []
for hm in filtered_hm_df["cleaned_hm"].tolist():
filtered_tokens = []
for token in nlp(hm):
# Remove stop words
if token.is_stop:
continue
# Filter tokens that belong to predefined POS types
if token.pos_ not in pos_set:
continue
filtered_tokens.append(token.lemma_)
proc_hm = " ".join(filtered_tokens)
proc_hm_list.append(proc_hm)
filtered_hm_df["proc_hm"] = proc_hm_list
filtered_hm_df["proc_hm"]
Xcount_proc = CountVectorizer().fit_transform(filtered_hm_df["proc_hm"])
f1_scores_count_proc, _ = run_cv(Xcount_proc, y, LogisticRegression(), len(le.classes_))
eval_df = pd.DataFrame({"CountVec": f1_scores_count,
"TfidfVec": f1_scores_tfidf,
"LDA": f1_scores_lda,
"Count+LDA": f1_scores_count_lda,
"Proc+CountVec": f1_scores_count_proc})
eval_df.mean(axis=0)
```
# Scikit-Learn
<!--<badge>--><a href="https://colab.research.google.com/github/TheAIDojo/Machine_Learning_Bootcamp/blob/main/Week 03 - Machine Learning Algorithms/1- Scikit_Learn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a><!--</badge>-->
[Scikit-learn](http://scikit-learn.org/stable/) is a Python-based machine learning library providing implementations of a great many algorithms for supervised and unsupervised learning. In large part, it builds upon the capabilities of NumPy, SciPy, matplotlib, and Pandas.
In the context of supervised learning, the primary objects scikit-learn defines are called **estimators**. Each of these defines a `fit` method, which develops a model from provided training data, and a `predict` method, which uses the model to map a new instance to a suitable target value. Scikit-learn also defines multiple utilities for partitioning and manipulating data sets as well as evaluating models.
Below, we cover some of the basic steps needed to create a model in scikit-learn. These notes are based on material appearing in the *scikit-learn tutorials*.
* [Tutorial](http://scikit-learn.org/stable/tutorial/index.html)
* [Cheatsheet](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Scikit_Learn_Cheat_Sheet_Python.pdf)
## Datasets
Scikit-learn comes bundled with several pre-defined (typically small) `datasets` that users can explore:
* `load_boston()`: Load and return the boston house-prices dataset (regression).
* `load_iris()`: Load and return the iris dataset (classification).
* `load_diabetes()`: Load and return the diabetes dataset (regression).
* `load_digits()`: Load and return the digits dataset (classification).
* `load_linnerud()`: Load and return the linnerud dataset (multivariate regression).
* `load_wine()`: Load and return the wine dataset (classification).
* `load_breast_cancer()`: Load and return the breast cancer wisconsin dataset (classification).
The iris dataset is loaded below, and a description of it is printed.
```
import numpy as np
import pandas as pd
# using 'from * import ...' allows as to import submodules directly
from sklearn import (
datasets,
model_selection,
linear_model,
metrics,
neighbors,
tree,
ensemble,
preprocessing,
)
# alternatively, we can import the whole package as such
import sklearn
iris_dataset = (
datasets.load_iris()
) # sklearn.datasets.load_iris() works exactly the same
print(iris_dataset.DESCR)
```
We can also use `iris_dataset.data` and `iris_dataset.target` to create our x & y (inputs & outputs) pairs that will be used for training and testing.
```
x = pd.DataFrame(iris_dataset.data, columns=iris_dataset.feature_names)
y = pd.DataFrame(iris_dataset.target, columns=["Labels"])
x
```
Alternatively, we can load a dataset into x & y directly (i.e. into input/output pairs) by setting the `return_X_y` parameter to `True`.
```
x, y = datasets.load_iris(return_X_y=True)
x.shape, y.shape
```
## Train/Test Split
In order to validate that our model can generalize to data that it wasn't trained on, it's necessary to create a separate **testing dataset** that will not be used in training.
Within the `model_selection` submodule of Scikit-Learn, there's the `train_test_split` function that we can use to automatically split the data into training and testing pairs.
Here's an explanation of the different parameters taken directly from the function's docstring
#### **Parameters**
**arrays** : sequence of indexables with same length / shape[0]
Allowed inputs are lists, numpy arrays, scipy-sparse
matrices or pandas dataframes.
**test_size** : float, int or None, optional (default=None)
If float, should be between 0.0 and 1.0 and represent the proportion
of the dataset to include in the test split. If int, represents the
absolute number of test samples. If None, the value is set to the
complement of the train size. If train_size is also None, it will
be set to 0.25.
**train_size** : float, int, or None, (default=None)
If float, should be between 0.0 and 1.0 and represent the
proportion of the dataset to include in the train split. If
int, represents the absolute number of train samples. If None,
the value is automatically set to the complement of the test size.
**random_state** : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by np.random.
**shuffle** : boolean, optional (default=True)
Whether or not to shuffle the data before splitting. If shuffle=False
then stratify must be None.
**stratify** : array-like or None (default=None)
If not None, data is split in a stratified fashion, using this as
the class labels.
```
x_train, x_test, y_train, y_test = model_selection.train_test_split(
x, y, test_size=0.1, random_state=42, stratify=y
)
```
Please note that the `stratify` parameter works only in the context of classification tasks, where there is a fixed set of possible outputs/targets.
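A quick way to see what `stratify` does is to compare the class proportions in the two splits produced above; this is just a sanity check and was not part of the original notebook.
```
# With stratify=y, the class proportions in train and test should be (nearly) identical
print(np.bincount(y_train) / len(y_train))
print(np.bincount(y_test) / len(y_test))
```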
# Fitting and predicting: estimator basics
Scikit-learn provides dozens of built-in machine learning algorithms and models, called estimators. Each estimator can be fitted to some data using its fit method.
Here is a simple example where we fit a logistic regression classifier to some very basic data:
```
x = [[ 1,  2,  3],  # 2 samples, 3 features
     [11, 12, 13]]
y = [0, 1]  # classes of each sample
model = linear_model.LogisticRegression()
model.fit(x, y)
pred = model.predict(x)  # predict classes of the training data
print(pred)
pred = model.predict([[4, 5, 6], [14, 15, 16]])  # predict classes of new data
print(pred)
```
The `fit` method generally accepts 2 inputs:
1. The samples matrix (or design matrix) X. The size of X is typically (n_samples, n_features), which means that samples are represented as rows and features are represented as columns.
2. The target values y, which are real numbers for regression tasks, or integers for classification (or any other discrete set of values). For unsupervised learning tasks, y does not need to be specified. y is usually a 1d array where the i-th entry corresponds to the target of the i-th sample (row) of X.
Both X and y are usually expected to be numpy arrays or equivalent array-like data types, though some estimators work with other formats such as sparse matrices.
Once the estimator is fitted, it can be used for predicting target values of new data, as shown in the example above. You don't need to re-train the estimator.
# Linear Regression
In statistics, linear regression is a linear approach to modelling the relationship between a set of features and a desired output. The case of one input feature is called simple linear regression; for more than one, the process is called multiple linear regression.
Scikit-Learn defines this algorithm in the `LinearRegression` class as part of the `linear_model` module.
First, we load the data
```
x, y = datasets.load_diabetes(return_X_y=True)
# normalize the values of x and y
y_normalize = preprocessing.MinMaxScaler()
y_norm = y_normalize.fit_transform(y.reshape(-1, 1))  # normalize the y
x_normalize = preprocessing.StandardScaler()
x_norm = x_normalize.fit_transform(x)  # normalize the x
print("Diabetes features/input shape:", x.shape)
print("Diabetes target/output shape:", y.shape)
```
Second, we split the data into a 90/10 training/testing split (90% of the data will be used for training while 10% will be used for testing)
```
x_train, x_test, y_train, y_test = model_selection.train_test_split(
x_norm, y_norm.reshape(-1), test_size=0.1, random_state=42
)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
```
Third, we train (i.e. `fit`) the model using the training dataset (`x_train` as inputs, `y_train` as targets)
```
regressor = (
linear_model.LinearRegression()
) # initialize the parameter of linear regression model
regressor.fit(x_train, y_train) # training the model on the train data
# we can preview the learned coefficients (i.e. weights) and intercept (i.e. bias)
print("Weights:\n", regressor.coef_)
print("Bias:\n", regressor.intercept_)
```
Fourth, we'll feed the test set into the trained model
```
y_pred = regressor.predict(x_test)
```
Finally, we'll evaluate the predicted output against the ground-truth values in `y_test` using Scikit Learn's `metrics` module
One of the most used metrics to evaluate regression models is `mean_squared_error` which has the following formula: $$\frac{1}{n}\sum_{i=1}^{n}(\hat y_i - y_i)^2$$
Where `n` is the total number of examples evaluated (in this case 45), $\hat y$ is the predicted value (here `y_pred`) and $y$ is the ground-truth value (here `y_test`)
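As a quick sanity check (not in the original notebook), the same quantity can be computed directly with NumPy and should match the `metrics.mean_squared_error` call below.
```
# Mean squared error computed by hand from the formula above
np.mean((y_pred - y_test) ** 2)
```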
```
metrics.mean_squared_error(y_test, y_pred)
```
# Logistic Regression
In statistics, the logistic model (or logit model) is used to model the probability of a certain class or event existing such as pass/fail, win/lose, alive/dead or healthy/sick. This can be extended to model several classes of events such as determining whether an image contains a cat, dog, lion, etc. Each object being detected in the image would be assigned a probability between 0 and 1, with a sum of one.
Scikit-Learn defines this algorithm in the `LogisticRegression` class as part of the `linear_model` module.
First, we load the data
```
x, y = datasets.load_breast_cancer(return_X_y=True)
# normalize the values of x
x_normalize = preprocessing.StandardScaler()
x_norm = x_normalize.fit_transform(x)
print("Breast Cancer features/input shape:", x_norm.shape)
print("Breast Cancer target/output shape:", y.shape)
```
Second, we split the data into a 90/10 training/testing split (90% of the data will be used for training while 10% will be used for testing)
Since this is a classification problem (we only have two possible outputs, 1 or 0), we can use the `stratify` parameter to ensure that the two possible output values are distributed proportionally between the training and testing sets and preserve the data's original distribution across the two sets.
```
x_train, x_test, y_train, y_test = model_selection.train_test_split(
x_norm, y, test_size=0.1, random_state=42, stratify=y
)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
```
Third, we train (i.e. `fit`) the model using the training dataset (`x_train` as inputs, `y_train` as targets)
```
classifier = linear_model.LogisticRegression()
classifier.fit(x_train, y_train)
# we can preview the learned coefficients (i.e. weights) and intercept (i.e. bias)
print("Weights:\n", classifier.coef_)
print("Bias:\n", classifier.intercept_)
```
Fourth, we'll feed the test set into the trained model
```
y_pred = classifier.predict(x_test)
```
Finally, we'll evaluate the predicted output against the ground-truth values in `y_test` using Scikit Learn's `metrics` module
One of the most used metrics to evaluate classification models is `accuracy_score`, which calculates the percentage of examples that the trained classifier predicted correctly
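Equivalently (as a sanity check that is not in the original notebook), accuracy is just the fraction of matching predictions:
```
# Accuracy computed by hand; should match metrics.accuracy_score below
np.mean(y_pred == y_test)
```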
```
metrics.accuracy_score(y_test, y_pred)
```
# Pipeline
Scikit-learn's `Pipeline` class is a useful tool for encapsulating multiple transformers alongside an estimator into one object, so that you only have to call the important methods once (`fit()`, `predict()`, etc.).
```
# Import the sklearn pipeline
from sklearn.pipeline import Pipeline
# Download the dataset
x, y = datasets.load_breast_cancer(return_X_y=True)
# Split the dataset to train and test
x_train, x_test, y_train, y_test = model_selection.train_test_split(
x, y, test_size=0.1, random_state=42, stratify=y
)
```
The first step in building the pipeline is to define each step as a (name, transformer/estimator) pair. In the code below we create a simple pipeline that first applies a `StandardScaler` to standardize the features and then fits a `LogisticRegression` classifier. More elaborate pipelines can also include steps such as `SimpleImputer` to fill in missing values or `OneHotEncoder` to transform categorical values into integers, but those are not needed for this dataset.
```
# Create the sklearn pipeline
pipe = Pipeline([('scaler', preprocessing.StandardScaler()),
('Logistic_R', linear_model.LogisticRegression())])
# fit the pipeline
pipe.fit(x_train, y_train)
# Calculate the Accuracy of the model
pipe.score(x_test, y_test)
```
# NearestCentroid with MaxAbsScaler and QuantileTransformer
This code template is for a classification task using a simple NearestCentroid classifier, with the data rescaling technique MaxAbsScaler and the feature transformation QuantileTransformer in a pipeline.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from imblearn.over_sampling import RandomOverSampler
from sklearn.preprocessing import LabelEncoder, MaxAbsScaler,QuantileTransformer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.preprocessing import PowerTransformer
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below contains functions which remove null values if any exist, and which convert string class data in the dataset by encoding it as integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
### Data Rescaling:
Scale each feature by its maximum absolute value.
This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity.
This scaler can also be applied to sparse CSR or CSC matrices. [MaxAbsScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html)
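A toy illustration of this scaling (the array below is made up for demonstration): each column is divided by its maximum absolute value, so the result lies in [-1, 1].
```
# Hypothetical example: MaxAbsScaler divides every column by its maximum absolute value
toy = np.array([[1.0, -2.0], [2.0, 4.0], [-4.0, 1.0]])
print(MaxAbsScaler().fit_transform(toy))
# [[ 0.25 -0.5 ]
#  [ 0.5   1.  ]
#  [-1.    0.25]]
```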
### Feature Transformation
#### Quantile Transformer
Transform features using quantiles information.
This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme.
[For More Reference](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html)
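A toy illustration (using made-up, strongly skewed data): after the transformation the values are spread roughly uniformly on [0, 1], regardless of the original distribution.
```
# Hypothetical example: a log-normal (right-skewed) feature mapped to ~uniform [0, 1]
rng = np.random.RandomState(0)
skewed = np.exp(rng.randn(1000, 1))
qt = QuantileTransformer(n_quantiles=100, output_distribution='uniform')
transformed = qt.fit_transform(skewed)
print(skewed.min(), skewed.max())            # wide, skewed range
print(transformed.min(), transformed.max())  # approximately 0 and 1
```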
### Model
The NearestCentroid classifier is a simple algorithm that represents each class by the centroid of its members. In effect, this makes it similar to the label updating phase of the KMeans algorithm. It also has no parameters to choose, making it a good baseline classifier. It does, however, suffer on non-convex classes, as well as when classes have drastically different variances, as equal variance in all dimensions is assumed.
#### Tuning Parameter
> **metric** : The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by metrics.pairwise.pairwise_distances for its metric parameter. The centroids for the samples corresponding to each class is the point from which the sum of the distances of all samples that belong to that particular class are minimized. If the “manhattan” metric is provided, this centroid is the median and for all other metrics, the centroid is now set to be the mean.
> **shrink_threshold**: Threshold for shrinking centroids to remove features.
```
# Build Model here
model = make_pipeline(MaxAbsScaler(),QuantileTransformer(),NearestCentroid())
model.fit(x_train, y_train)
```
#### Model Accuracy
score() method return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are true and how many are false.
* **where**:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
- f1-score:- Harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
### Creator: Jay Shimpi, GitHub: [profile](https://github.com/JayShimpi22)
# Spatial Analysis
### Imports
```
import pandas as pd
import geopandas as gpd
import requests
import warnings
import matplotlib.pyplot as plt
def df_to_gdf(
df: pd.DataFrame,
crs: str='EPSG:4326',
lat_col: str='Latitude',
lon_col: str='Longitude'
):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
gdf = gpd.GeoDataFrame(
df.drop(columns=[lat_col, lon_col]),
geometry=gpd.points_from_xy(df[lon_col].values, df[lat_col].values, crs=crs),  # points_from_xy expects (x, y) = (longitude, latitude)
crs=crs
)
return gdf
def load_subsation_locs_gdf(
wpd_network_capacity_map_url: str='https://connecteddata.westernpower.co.uk/dataset/967404e0-f25c-469b-8857-1a396f3c363f/resource/d1895bd3-d9d2-4886-a0a3-b7eadd9ab6c2/download/wpd-network-capacity-map.csv',
network_ids_filter: list=[15130, 15264, 15246]
):
df_wpd_map = pd.read_csv(wpd_network_capacity_map_url)
df_wpd_map_focus = df_wpd_map.query('Network_Reference_ID in @network_ids_filter')
df_subsation_locs = df_wpd_map_focus.set_index('Substation_Name')[['Latitude', 'Longitude']]
df_subsation_locs.index = df_subsation_locs.index.str.lower()
gdf_subsation_locs = df_to_gdf(df_subsation_locs)
return gdf_subsation_locs
gdf_subsation_locs = load_subsation_locs_gdf()
gdf_subsation_locs
def load_weather_grid_locs_gdf(
weather_grid_locs: list=[
{'Name': 'mousehole_1', 'Latitude': 50.0, 'Longitude': -5.625},
{'Name': 'mousehole_2', 'Latitude': 50.0, 'Longitude': -5.0},
{'Name': 'mousehole_3', 'Latitude': 50.5, 'Longitude': -5.625},
{'Name': 'mousehole_4', 'Latitude': 50.5, 'Longitude': -5.0},
{'Name': 'mousehole_5', 'Latitude': 50.5, 'Longitude': -4.375},
{'Name': 'staplegrove_1', 'Latitude': 51.0, 'Longitude': -3.125},
{'Name': 'staplegrove_2', 'Latitude': 51.0, 'Longitude': -2.5},
{'Name': 'staplegrove_3', 'Latitude': 51.5, 'Longitude': -3.125},
{'Name': 'staplegrove_4', 'Latitude': 51.5, 'Longitude': -2.5},
{'Name': 'staplegrove_5', 'Latitude': 51.0, 'Longitude': -3.75}
]
):
gdf_weather_grid_locs = df_to_gdf(pd.DataFrame(weather_grid_locs).set_index('Name'))
return gdf_weather_grid_locs
gdf_weather_grid_locs = load_weather_grid_locs_gdf()
gdf_weather_grid_locs
fig, ax = plt.subplots(dpi=150)
gdf_weather_grid_locs.plot(ax=ax, label='Weather grid')
gdf_subsation_locs.plot(ax=ax, label='Substation')
ax.legend(frameon=False, bbox_to_anchor=(1, 1))
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Today's data
400 photos of human faces. Each face is a 2d array [64x64] of pixel brightness.
```
from sklearn.datasets import fetch_olivetti_faces
data = fetch_olivetti_faces().images
# this code showcases matplotlib subplots. The syntax is: plt.subplot(height, width, index_starting_from_1)
plt.subplot(2,2,1)
plt.imshow(data[0],cmap='gray')
plt.subplot(2,2,2)
plt.imshow(data[1],cmap='gray')
plt.subplot(2,2,3)
plt.imshow(data[2],cmap='gray')
plt.subplot(2,2,4)
plt.imshow(data[3],cmap='gray')
```
# Face reconstruction problem
Let's solve the face reconstruction problem: given the left halves of faces __(X)__, our algorithm shall predict the right halves __(y)__. Our first step is to slice the photos into X and y using slices.
__Slices in numpy:__
* In regular python, slice looks roughly like this: `a[2:5]` _(select elements from 2 to 5)_
* Numpy allows you to slice N-dimensional arrays along each dimension: [image_index, height, width]
* `data[:10]` - Select first 10 images
* `data[:, :10]` - For all images, select a horizontal stripe 10 pixels high at the top of the image
* `data[10:20, :, -25:-15]` - Take images [10, 11, ..., 19], for each image select a _vertical stripe_ of width 10 pixels, 15 pixels away from the _right_ side.
__Your task:__
Let's use slices to select all __left image halves as X__ and all __right halves as y__.
```
# select left half of each face as X, right half as Y
X = <Slice left half-images>
y = <Slice right half-images>
# If you did everything right, you're gonna see left half-image and right half-image drawn separately in natural order
plt.subplot(1,2,1)
plt.imshow(X[0],cmap='gray')
plt.subplot(1,2,2)
plt.imshow(y[0],cmap='gray')
assert X.shape == y.shape == (len(data), 64, 32), "Please slice exactly the left half-face to X and right half-face to Y"
def glue(left_half,right_half):
# merge photos back together
left_half = left_half.reshape([-1,64,32])
right_half = right_half.reshape([-1,64,32])
return np.concatenate([left_half,right_half],axis=-1)
# if you did everything right, you're gonna see a valid face
plt.imshow(glue(X,y)[99],cmap='gray')
```
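If you want to check your answer, here is a minimal sketch of one possible slicing, assuming each photo is 64×64 so each half is 64×32.
```
# One possible solution to the slicing task above
X = data[:, :, :32]   # all images, all rows, left 32 columns
y = data[:, :, 32:]   # all images, all rows, right 32 columns
```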
# Machine learning stuff
```
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test = train_test_split(X.reshape([len(X),-1]),
y.reshape([len(y),-1]),
test_size=0.05,random_state=42)
print(X_test.shape)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train,Y_train)
```
measure mean squared error
```
from sklearn.metrics import mean_squared_error
print("Train MSE:", mean_squared_error(Y_train,model.predict(X_train)))
print("Test MSE:", mean_squared_error(Y_test,model.predict(X_test)))
# Train predictions
pics = glue(X_train,model.predict(X_train))
plt.figure(figsize=[16,12])
for i in range(20):
plt.subplot(4,5,i+1)
plt.imshow(pics[i],cmap='gray')
# Test predictions
pics = glue(X_test,model.predict(X_test))
plt.figure(figsize=[16,12])
for i in range(20):
plt.subplot(4,5,i+1)
plt.imshow(pics[i],cmap='gray')
```
# Ridge regression
`Ridge` regression is just `LinearRegression` with L2 regularization: the loss is penalized by $\alpha \cdot \sum_i w_i^2$.
Let's train such a model with alpha=0.5
```
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=0.5)
<YOUR CODE: fit the model on training set>
<YOUR CODE: predict and measure MSE on train and test>
# Test predictions
pics = glue(X_test,ridge.predict(X_test))
plt.figure(figsize=[16,12])
for i in range(20):
plt.subplot(4,5,i+1)
plt.imshow(pics[i],cmap='gray')
```
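For reference, a minimal sketch of one way to complete the exercise above (fit on the training set, then report MSE on the train and test sets):
```
# One possible completion of the Ridge exercise
ridge = Ridge(alpha=0.5)
ridge.fit(X_train, Y_train)
print("Train MSE:", mean_squared_error(Y_train, ridge.predict(X_train)))
print("Test MSE:", mean_squared_error(Y_test, ridge.predict(X_test)))
```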
# Grid search
Train the model with different $\alpha$ values and find the one with minimal test MSE. It's okay to use loops or any other Python tools here.
```
<YOUR CODE>
# Test predictions
pics = glue(X_test,<predict with your best model>)
plt.figure(figsize=[16,12])
for i in range(20):
plt.subplot(4,5,i+1)
plt.imshow(pics[i],cmap='gray')
```
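A minimal sketch of one way to do the grid search with a plain Python loop; the search range over $\alpha$ is an arbitrary choice, and `best_model.predict(X_test)` can then be plugged into the placeholder above.
```
# One possible grid search over alpha, selecting by test MSE
best_alpha, best_mse, best_model = None, np.inf, None
for alpha in np.logspace(-4, 4, 17):
    m = Ridge(alpha=alpha).fit(X_train, Y_train)
    mse = mean_squared_error(Y_test, m.predict(X_test))
    if mse < best_mse:
        best_alpha, best_mse, best_model = alpha, mse, m
print("Best alpha:", best_alpha, "Test MSE:", best_mse)
```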
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.optimize import minimize_scalar, minimize
from time import time
import seaborn as sns
sns.set_style('darkgrid')
sns.set_context('paper')
import sys
sys.path.append('..')
from osd import Problem
from osd.components import GaussNoise, SmoothSecondDifference, SparseFirstDiffConvex, SparseSecondDiffConvex
from osd.utilities import progress
import cvxpy as cvx
# SOLVER = 'MOSEK'
SOLVER = 'SCS'
```
# Convex example, $K=3$
```
np.random.seed(142)
t = np.linspace(0, 250, 1000)
c0 = 0.1 * np.random.randn(len(t))
c2 = 2 * np.abs(signal.sawtooth(2 * np.pi / 50 * t))
# c3 = 0.5 * (np.sin(2 * np.pi * t * 5 / (500.)) + np.cos(2 * np.pi * t * 7 / (550.)))
c3 = 0.25 * (np.sin(2 * np.pi * t * 5 / (500.)) + np.cos(2 * np.pi * t * 2.5 / (500.) - 50))
y = np.sum([c0, c2, c3], axis=0)
signal1 = c2
signal2 = c3
components = [c0, c2, c3]
# np.random.seed(42)
# t = np.linspace(0, 1000, 3000)
# signal1 = np.sin(2 * np.pi * t * 1 / (500.))
# signal2 = signal.square(2 * np.pi * t * 1 / (450.))
# y = signal1 + signal2 + 0.25 * np.random.randn(len(signal1))
plt.figure(figsize=(10, 6))
plt.plot(t, signal1 + signal2, label='true signal minus noise')
plt.plot(t, y, alpha=0.5, label='observed signal')
plt.legend()
plt.show()
```
# Solve problem all at once with CVXPY
```
problem = Problem(data=y, components=[GaussNoise, SparseSecondDiffConvex(vmax=2, vmin=0),
SmoothSecondDifference])
problem.weights.value = [1, 2e0, 1e4]
problem.decompose(solver='MOSEK')
problem.problem.value
fig, ax = plt.subplots(nrows=3, figsize=(10//1.1, 12//1.5))
ax[0].plot(t, signal1, label='hidden component 1', ls='--')
ax[0].plot(t, problem.estimates[1], label='estimate 1')
ax[1].plot(t, signal2, label='hidden component 2', ls='--')
ax[1].plot(t, problem.estimates[2], label='estimate 2')
ax[2].plot(t, signal1 + signal2, label='true composite signal', ls='--')
ax[2].plot(t, problem.estimates[1] + problem.estimates[2], label='estimated signal');
ax[2].plot(t, y, label='observed signal', linewidth=1, marker='.', alpha=0.1);
for a in ax:
a.legend()
foo = cvx.Parameter((2, 3), value=np.array([[1, 0, 0], [0, 0, 1]]))
bar = cvx.Variable(3)
foo @ bar
bar[foo]
foo.value
problem.problem.parameters()
import cvxpy as cvx
import torch
from cvxpylayers.torch import CvxpyLayer
# def create_layer(osd_problem):
# prob = osd_problem.problem
# layer = CvxpyLayer(
# prob,
# parameters=prob.parameters(),
# variables=prob.variables())
# return layer
def create_layer(signal_length, index_set):
n = signal_length
y_cvx = cvx.Variable(n)
x1_cvx = cvx.Variable(n)
x2_cvx = cvx.Variable(n)
x3_cvx = cvx.Variable(n)
y_data = cvx.Parameter(n)
weight_param = cvx.Parameter(2, pos=True)
costs = [cvx.sum_squares(x1_cvx), cvx.sum_squares(cvx.diff(x2_cvx, k=2)), cvx.sum(cvx.abs(cvx.diff(x3_cvx, k=1)))]
objective = costs[0] + weight_param[0] * costs[1] + weight_param[1] * costs[2]
constraints = [
y_cvx == x1_cvx + x2_cvx + x3_cvx,
y_cvx[index_set] - y_data[index_set] == 0
]
prob = cvx.Problem(cvx.Minimize(objective), constraints)
layer = CvxpyLayer(
prob,
parameters=[y_data, weight_param],
variables=[x1_cvx, x2_cvx, x3_cvx]
)
return layer
index_set = np.random.uniform(size=len(y)) > 0.2
layer = create_layer(len(y), index_set)
import torch
from torch.utils.data import TensorDataset, DataLoader
import numpy as np
from cvxpylayers.torch import CvxpyLayer
torch.set_default_dtype(torch.double)
from tqdm.notebook import tqdm
def fit(loss, params, X, Y, Xval, Yval, batch_size=128, lr=1e-3, epochs=100, verbose=False, print_every=1, callback=None):
"""
Arguments:
loss: given x and y in batched form, evaluates loss.
params: list of parameters to optimize.
X: input data, torch tensor.
Y: output data, torch tensor.
Xval: input validation data, torch tensor.
Yval: output validation data, torch tensor.
"""
train_dset = TensorDataset(X, Y)
train_loader = DataLoader(train_dset, batch_size=batch_size, shuffle=True)
opt = torch.optim.Adam(params, lr=lr)
train_losses = []
val_losses = []
for epoch in tqdm(range(epochs)):
if callback is not None:
callback()
with torch.no_grad():
val_losses.append(loss(Xval, Yval).item())
if verbose and epoch % print_every == 0:
print("val loss %03d | %3.5f" % (epoch + 1, val_losses[-1]))
batch = 1
train_losses.append([])
for Xbatch, Ybatch in train_loader:
opt.zero_grad()
l = loss(Xbatch, Ybatch)
l.backward()
opt.step()
train_losses[-1].append(l.item())
if verbose and epoch % print_every == 0:
print("batch %03d / %03d | %3.5f" %
(batch, len(train_loader), np.mean(train_losses[-1])))
batch += 1
return val_losses, train_losses
weights_tch = torch.tensor([1e7, 1e1], requires_grad=True)
def loss_fn(X, Y):
    # reconstruct with the cvxpy layer using the current weights and compare to the target
    preds = layer(X, weights_tch)[0]
    mse_per_example = (preds - Y).pow(2).mean(axis=1)
    return mse_per_example.mean()
weights_tch = torch.tensor([1e7, 1e1], requires_grad=True)
layer(torch.tensor(y, requires_grad=True), weights_tch)
```
# Simple implementation of ADMM algorithm
Nothing fancy here. Just a quick and dirty implementation of the three proximal operators.
```
def prox1(v, theta, rho):
r = rho / (2 * theta + rho)
return r * v
def prox2(v, theta, rho, A=None, return_A=True):
if A is None:
n = len(v)
M = np.diff(np.eye(n), axis=0, n=2)
r = 2 * theta / rho
A = np.linalg.inv(np.eye(n) + r * M.T.dot(M))
if not return_A:
return A.dot(v)
else:
return A.dot(v), A
def prox3_cvx(v, theta, rho):
n = len(v)
M = np.diff(np.eye(n), axis=0, n=1)
x = cvx.Variable(n)
cost = theta * cvx.norm1(cvx.diff(x)) + (rho / 2) * cvx.sum_squares(x - v)
problem = cvx.Problem(cvx.Minimize(cost), [cvx.sum(x) == 0])
problem.solve(solver='MOSEK')
return x.value
def calc_obj(y, x2, x3, rho1=1, rho2=1e7, rho3=1e1):
x1 = y - x2 - x3
t1 = rho1 * np.sum(np.power(x1, 2))
t2 = rho2 * np.sum(np.power(np.diff(x2, 2), 2))
t3 = rho3 * np.sum(np.abs(np.diff(x3, 1)))
return t1 + t2 + t3
def run_admm(data, num_iter=50, rho=0.5, verbose=True, prox3=prox3_cvx):
y = data
A = None
u = np.zeros_like(y)
x1 = y / 3
x2 = y / 3
x3 = y / 3
residuals = []
obj_vals = []
ti = time()
for it in range(num_iter):
if verbose:
td = time() - ti
progress(it, num_iter, '{:.2f} sec'.format(td))
x1 = prox1(x1 - u, 1, rho)
x2, A = prox2(x2 - u, 1e7, rho, A=A, return_A=True)
x3 = prox3(x3 - u, 1e1, rho)
u += 2 * (np.average([x1, x2, x3], axis=0) - y / 3)
# mean-square-error
error = np.sum([x1, x2, x3], axis=0) - y
mse = np.sum(np.power(error, 2)) / error.size
residuals.append(mse)
obj_vals.append(calc_obj(y, x2, x3))
if verbose:
td = time() - ti
progress(it + 1, num_iter, '{:.2f} sec\n'.format(td))
outdict = {
'x1': x1,
'x2': x2,
'x3': x3,
'u': u,
'residuals': residuals,
'obj_vals': obj_vals
}
return outdict
run1 = run_admm(y, num_iter=1000, rho=1e-1)
run2 = run_admm(y, num_iter=1000, rho=1e0)
run3 = run_admm(y, num_iter=1000, rho=1e1)
error = np.sum(problem.estimates, axis=0) - y
mse = np.sum(np.power(error, 2)) / error.size
plt.figure(figsize=(10,8))
plt.plot(run1['residuals'], label='$\\rho=0.1$', linewidth=1)
plt.plot(run2['residuals'], label='$\\rho=1$', linewidth=1)
plt.plot(run3['residuals'], label='$\\rho=10$', linewidth=1)
plt.axhline(mse, ls='--', color='red', label='cvxpy')
plt.yscale('log')
plt.legend(loc=1)
plt.title('Infeasibility')
plt.xlabel('iteration');
plt.plot(run1['obj_vals'], label='admm_run1', linewidth=1)
plt.plot(run2['obj_vals'], label='admm_run2', linewidth=1)
plt.plot(run3['obj_vals'], label='admm_run3', linewidth=1)
plt.axhline(problem.problem.value, ls='--', color='red', label='cvxpy')
plt.legend()
plt.title('Objective Value')
plt.xlabel('iteration')
plt.ylim(260, 270);
plt.plot(1e0 * run2['u'], problem.problem.constraints[-1].dual_value, ls='none', marker='.')
plt.xlabel('ADMM $\\nu = \\rho u$')
plt.ylabel('CVXPY dual value');
fig, ax = plt.subplots(nrows=3, figsize=(10//1.1, 12//1.5))
ax[0].plot(t, signal1, label='hidden component 1', ls='--')
ax[0].plot(t, problem.estimates[1], label='CVXPY estimate 1')
ax[0].plot(t, run2['x2'], label='ADMM estimate 1')
ax[1].plot(t, signal2, label='hidden component 2', ls='--')
ax[1].plot(t, problem.estimates[2], label='CVXPY estimate 2')
ax[1].plot(t, run2['x3'], label='ADMM estimate 2')
ax[2].plot(t, signal1 + signal2, label='true composite signal', ls='--')
ax[2].plot(t, problem.estimates[1] + problem.estimates[2], label='CVXPY estimated signal');
ax[2].plot(t, run2['x2'] + run2['x3'], label='ADMM estimated signal');
ax[2].plot(t, y, label='observed signal', linewidth=1, marker='.', alpha=0.1);
for a in ax:
a.legend()
```
# Non-convex model
Replace the heuristic for a sparse first difference with the constraint that $x^3\in\left\{-1,1\right\}^T$. Objective function is calculated using the L1-heuristic to allow for an apples-to-apples comparison to previous results.
```
def prox3_noncvx(v, theta, rho):
v1 = np.ones_like(v)
v2 = -1 * np.ones_like(v)
d1 = np.abs(v - v1)
d2 = np.abs(v - v2)
x = np.ones_like(v1)
x[d2 < d1] = -1
return x
run_noncvx = run_admm(y, num_iter=1000, rho=5, prox3=prox3_noncvx)
r = np.linalg.norm(
np.average(problem.estimates, axis=0) - y / 3
)
plt.plot(run1['residuals'], label='run1')
plt.plot(run2['residuals'], label='run2')
plt.plot(run3['residuals'], label='run3')
plt.plot(run_noncvx['residuals'], label='run_noncvx', ls='-.')
plt.axhline(r, ls='--', color='red', label='cvxpy')
plt.yscale('log')
plt.legend()
plt.title('Infeasibility')
plt.xlabel('iteration');
plt.plot(run1['obj_vals'], label='run1')
plt.plot(run2['obj_vals'], label='run2')
plt.plot(run3['obj_vals'], label='run3')
plt.plot(run_noncvx['obj_vals'], label='run_noncvx', ls='-.')
plt.axhline(problem.problem.objective.value, ls='--', color='red', label='cvxpy')
plt.legend()
plt.title('Objective Value')
plt.xlabel('iteration')
plt.ylim(260, 400);
fig, ax = plt.subplots(nrows=3, figsize=(10//1.1, 12//1.5))
ax[0].plot(t, signal1, label='hidden component 1', ls='--')
ax[0].plot(t, problem.estimates[1], label='CVXPY estimate 1')
ax[0].plot(t, run_noncvx['x2'], label='ADMM estimate 1')
ax[1].plot(t, signal2, label='hidden component 2', ls='--')
ax[1].plot(t, problem.estimates[2], label='CVXPY estimate 2')
ax[1].plot(t, run_noncvx['x3'], label='ADMM estimate 2')
ax[2].plot(t, signal1 + signal2, label='true composite signal', ls='--')
ax[2].plot(t, problem.estimates[1] + problem.estimates[2], label='CVXPY estimated signal');
ax[2].plot(t, run_noncvx['x2'] + run_noncvx['x3'], label='ADMM estimated signal');
ax[2].plot(t, y, label='observed signal', linewidth=1, marker='.', alpha=0.1);
for a in ax:
a.legend()
```
# Replication - High-Dimensional Case 2 - Table
Here we provide a notebook to replicate the summary tables for the high-dimensional case simulation.
The notebook replicates the results in:
- /out/simulation/tables/sim_hd2*
The main script can be found at:
- /scripts/simulation/tables/highdimensional_case2.py
## Please choose the setup for replication:
```
suffix = 'rank5' # rank5, rank50
R_suffix = 'R_lasso_theta_1se' # 'R_lasso_theta', 'R_lasso_theta_1se', 'R_Alasso1_theta', 'R_Alasso1_theta_1se', 'R_Alasso2_theta', 'R_Alasso2_theta_1se', 'R_SCAD_theta', 'R_MCP_theta', 'R_SCAD_theta'
# Modules
# =======================================================================================================================
import os
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
sim_name = 'sim_hd2'
# Function
# =======================================================================================================================
def custom_mean(X, W, col_idx):
    '''
    - average of the parameters of an array, selected by an indexing matrix
    X :: array to apply the mean to along axis=0
    W :: indexes which elements to use for the mean computation
    col_idx :: indexes the columns where W is applied - otherwise a standard mean without selecting elements
    '''
m = []
assert X.shape == W.shape
N, M = X.shape
for jj in range(M):
if col_idx[jj] == True:
m.append(np.mean(X[W[:, jj], jj]))
else:
m.append(np.mean(X[:, jj]))
return(np.asarray(m))
def custom_var(X, W, col_idx):
    '''
    - variance of the parameters of an array, selected by an indexing matrix
    X :: array to apply the variance to along axis=0
    W :: indexes which elements to use for the variance computation
    col_idx :: indexes the columns where W is applied - otherwise a standard variance without selecting elements
    '''
m = []
assert X.shape == W.shape
N, M = X.shape
for jj in range(M):
if col_idx[jj] == True:
m.append(np.var(X[W[:, jj], jj]))
else:
m.append(np.var(X[:, jj]))
return(np.asarray(m))
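# Toy illustration (hypothetical inputs): for column 0 only the entries selected by W
# enter the average, while column 1 falls back to the plain column mean.
X_demo = np.array([[1., 2.], [3., 4.]])
W_demo = np.array([[True, True], [False, True]])
print(custom_mean(X_demo, W_demo, np.array([True, False])))  # -> [1. 3.]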
# Simulation Settings
# =======================================================================================================================
I = 750
P = 1000
theta = np.concatenate((np.asarray([-0.5, 0.7, 1.2, 0.65, -0.9, 1.4, 0.2, -0.4, -1.3, 0.1]), np.zeros((990,))))[:, None]
# Overall Parameters
# =======================================================================================================================
url = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/N_obs.txt'
N_obs = pd.read_csv(url, header=None, sep=';')
print('Obs: ', np.min(N_obs.iloc[:, 1]), np.median(N_obs.iloc[:, 1]), np.max(N_obs.iloc[:, 1]))
print('Censorship: ', np.min(1-N_obs.iloc[:, 2]/I), np.median(1-N_obs.iloc[:, 2]/I), np.max(1-N_obs.iloc[:, 2]/I))
#print('Tied Events', np.min(N_obs.iloc[:, 3]), np.median(N_obs.iloc[:, 3]), np.max(N_obs.iloc[:, 3]))
# ProbCox Table
# =======================================================================================================================
res = np.zeros((P, 7))
res[:, 0] = theta[:, 0]
url1 = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/probcox' + suffix +'_theta.txt'
url2 = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/probcox' + suffix +'_theta_lower.txt'
url3 = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/probcox' + suffix +'_theta_upper.txt'
theta_est = pd.read_csv(url1, header=None, sep=';')
theta_est_lower = pd.read_csv(url2, header=None, sep=';')
theta_est_upper = pd.read_csv(url3, header=None, sep=';')
theta_est = theta_est.dropna(axis=0)
theta_est = theta_est.groupby(0).first().reset_index()
theta_est = theta_est.iloc[:, :-1]
assert theta_est.shape[0] == 200
theta_est_lower = theta_est_lower.dropna(axis=0)
theta_est_lower = theta_est_lower.groupby(0).first().reset_index()
theta_est_lower = theta_est_lower.iloc[:, :-1]
assert theta_est_lower.shape[0] == 200
theta_est_upper = theta_est_upper.dropna(axis=0)
theta_est_upper = theta_est_upper.groupby(0).first().reset_index()
theta_est_upper = theta_est_upper.iloc[:, :-1]
assert theta_est_upper.shape[0] == 200
theta_bound = theta_est_lower.merge(theta_est_upper, how='inner', on=0)
theta_bound = theta_bound.merge(theta_est, how='inner', on=0)
theta_est = np.asarray(theta_bound.iloc[:, -P:]).astype(float)
theta_bound = theta_bound.iloc[:, :-P]
theta_bound = np.asarray(theta_bound.iloc[:, 1:]).astype(float)
theta_est_lower = np.asarray(theta_est_lower.iloc[:, 1:])
theta_est_upper = np.asarray(theta_est_upper.iloc[:, 1:])
W = np.sign(theta_est_lower) == np.sign(theta_est_upper) # non zero parameters estimates (based on HPD95%)
col_idx = np.logical_and(np.squeeze(theta != 0), np.sum(W, axis=0) > 5) # true non-zero parameters
res[:, 1] = custom_mean(theta_est, W, col_idx)
res[:, 2] = np.sqrt(custom_var(theta_est, W, col_idx))
res[:, 3] = np.sqrt(custom_mean((theta_est - theta[:, 0][None, :])**2, W, col_idx))
res[:, 4] = custom_mean(theta_bound[:, -P:] - theta_bound[:, :P], W, col_idx)
res[:, 5] = custom_mean(np.logical_and(np.squeeze(theta)[None, :] >= theta_bound[:, :P], np.squeeze(theta)[None, :] <= theta_bound[:, -P:])
, W, col_idx)
res[:, 6] = np.mean(W, axis=0)
res = np.round(res, 2)
#pd.DataFrame(res) # full table with 0 parameters
pd.DataFrame(res[:10, :])
# column headings
#$\theta$ $\bar{\hat{\theta}}$ $\overline{\sigma_{\hat{\theta}}}$ $RMSE$ $\overline{HPD}_{95\%}$ $Coverage_{95\%}$ $p_{|\hat{\theta}| > 0}$
# Evaluating identification
theta_est_lower = theta_bound[:, :1000]
theta_est_upper = theta_bound[:, 1000:]
pd.DataFrame(np.concatenate((np.round(np.mean(np.sum(np.sign(theta_est_lower[:, :]) == np.sign(theta_est_upper[:, :]), axis=1)))[None, None], np.round(np.sqrt(np.var(np.sum(np.sign(theta_est_lower[:, :]) == np.sign(theta_est_upper[:, :]), axis=1))))[None, None], np.round(np.mean(np.sum((np.sign(theta_est_lower[:, :]) == np.sign(theta_est_upper[:, :])) * np.squeeze(theta == 0)[None, :], axis=1)))[None, None]), axis=1))
# column headings
# number of covariates identified standard error falsly identified
# R-Cox Table
# =======================================================================================================================
res = np.zeros((P, 7))
res[:, 0] = theta[:, 0]
url = 'https://raw.githubusercontent.com/alexwjung/ProbCox/main/paper/ProbCox/out/simulation/sim_hd2/' + R_suffix + '.txt'
theta_est = pd.read_csv(url, header=None, sep=';')
theta_est = theta_est.dropna(axis=0)
theta_est = theta_est.groupby(0).first().reset_index()
theta_est = np.asarray(theta_est.iloc[:, 1:])
assert theta_est.shape[0] == 200
W = theta_est!=0 # non zero parameters estimates (based on HPD95%)
col_idx = np.logical_and(np.squeeze(theta != 0), np.sum(W, axis=0) > 5) # true non-zero parameters
res[:, 1] = custom_mean(theta_est, W, col_idx)
res[:, 2] = np.sqrt(custom_var(theta_est, W, col_idx))
res[:, 3] = np.sqrt(custom_mean((theta_est - theta[:, 0][None, :])**2, W, col_idx))
res[:, 6] = np.mean(W, axis=0)
res = np.round(res, 2)
# pd.DataFrame(res) # full table with 0 parameters
res = pd.DataFrame(res[:10, :])
res.iloc[:, 4] = '-'
res.iloc[:, 5] = '-'
res
# column headings
#$\theta$ $\bar{\hat{\theta}}$ $\overline{\sigma_{\hat{\theta}}}$ $RMSE$ $\overline{CI}_{95\%}$ $Coverage_{95\%}$ $p_{|\hat{\theta}| > 0}$
# Evaluating identification
pd.DataFrame(np.concatenate((np.round(np.mean(np.sum(theta_est != 0, axis=1)))[None, None], np.round(np.sqrt(np.var(np.sum(theta_est != 0, axis=1))))[None, None],np.round(np.mean(np.sum((theta_est != 0) * np.squeeze(theta == 0)[None, :], axis=1)))[None, None]), axis=1))
# column headings
# number of covariates identified standard error falsly identified
```
# Jacobi Method
From: https://en.wikipedia.org/wiki/Jacobi_method :
#### Jacobi Method
In numerical linear algebra, the Jacobi method is an iterative algorithm for determining the solutions of a diagonally dominant system of linear equations.
<br>
<br>
#### Convergence
A sufficient (but not necessary) condition for the method to converge is that the matrix A is strictly or irreducibly diagonally dominant.
<br>
<br>
#### Description
Let
$A\mathbf x = \mathbf b$
be a square system of $n$ linear equations, where:
$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_{1} \\ x_2 \\ \vdots \\ x_n \end{bmatrix} , \qquad \mathbf{b} = \begin{bmatrix} b_{1} \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.$
Then $A$ can be decomposed into a diagonal matrix $D$, and the remainder $R$:
$A=D+R \qquad \text{where} \qquad D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\0 & 0 & \cdots & a_{nn} \end{bmatrix} \text{ and } R = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}. $
The solution is then obtained iteratively via
$ \mathbf{x}^{(k+1)} = D^{-1} (\mathbf{b} - R \mathbf{x}^{(k)}), $
where $\mathbf{x}^{(k)}$ is the $k$th approximation or iteration of $\mathbf{x}$ and $\mathbf{x}^{(k+1)}$ is the next, or $(k+1)$th, iteration of $\mathbf{x}$.
$$x^{(k+1)}=D^{-1}(b - Rx^{(k)})$$
#### Equivalently:
##### (The following equations are used in the code below):
$$x^{(k+1)}= Tx^{(k)} + C $$
$$T=-D^{-1}R $$
$$C = D^{-1}b $$
#### Stop Condition:
$$ \lVert X^{(k+1)} - X^{(k)} \rVert_2 \le 10^{-4}$$
```
import numpy as np
def jacobi(A,b,initial_guess):
    # Extract the diagonal elements from the input matrix A:
    Diagonal = np.diag(A)
    D = np.diagflat(Diagonal)
    # Calculate the inverse of D:
    D_inv = np.linalg.inv(D)
    # Calculate R:
    R = A - D
#Symbol of matrix multiplication in numpy is @
T = -D_inv@R
C = D_inv@b
x = initial_guess
while(1):
x_old = x
x = T@x + C
x_new = x
#using norm2:
if np.linalg.norm(x_new-x_old) <= 10**(-4):
break
return x
A = np.matrix([[2.0,1.0],
[5.0,7.0]])
b = np.matrix([[11.0],[13.0]])
initialGuess = np.matrix([[1.0],[1.0]])
sol = jacobi(A,b,initialGuess)
print ('A:')
print(A)
print ('\nb:')
print(b)
print('\nSolution:')
print(sol)
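# Sanity check (illustrative): the computed solution should satisfy A @ x ~ b
print('\nResidual ||Ax - b||:')
print(np.linalg.norm(A@sol - b))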
```
(Real_Non_Linear_Neural_Network)=
# Chapter 7 -- Real (Non-linear) Neural Network
In the previous example, we derived the gradients for a two-layer neural network. That network finds the straight line that separates the two groups in figure 7.1 in the introduction. However, in reality, we often have groups like the following:
<img src="images/groups.PNG" width="400">
Figure 7.1
For data like this, a linear separator cannot satisfy our needs. The solution is to add another linear separator on top of the original linear separator.
<img src="images/groups1.PNG" width="500">
Figure 7.2
This is a classic three-layer neural network. The layer on the left-hand side is called the input layer; the layer on the right-hand side is known as the output layer; the layer in between is called the hidden layer. The hidden layer is like a black box whose internals we usually cannot interpret intuitively. We will dig into more detail later.
<img src="images/threeLayers.PNG" width="500">
Figure 7.3
<img src="images/threeLayers1.PNG" width="400">
Figure 7.4
This is finally starting to look like a network, but the way we express the weights gets more complicated. Here is the conventional notation:
$$
w_{ab}^{(n)}
$$ (eq7_1)
where $n$ denotes the $n^{th}$ layer in the neural net, with $n=1$ at the input layer. For $n=1$, the indices $a$ and $b$ mean that the weight points from the $a^{th}$ neuron in the second (hidden) layer to the $b^{th}$ neuron in the input layer. That is, the weight points backwards (to the left).
<img src="images/bpg.PNG" width="400">
Figure 7.5
For example, the weights in the input layer in figure 7.4 can be defined as follows
$$
W^{(1)}= \begin{bmatrix}
w^{(1)}_{11} & w^{(1)}_{21} \\
w^{(1)}_{12} & w^{(1)}_{22} \\
w^{(1)}_{13} & w^{(1)}_{23}
\end{bmatrix} =
\begin{bmatrix}
5 & 7 \\
-2 & -3 \\
-8 & 1
\end{bmatrix}
$$ (eq7_2)
And the weights in the hidden layer can be defined as
$$
W^{(2)}= \begin{bmatrix}
w^{(2)}_{11} \\
w^{(2)}_{12} \\
w^{(2)}_{13}
\end{bmatrix} =
\begin{bmatrix}
7 \\
5 \\
-6
\end{bmatrix}
$$ (eq7_3)
In Python, we can describe the core features of such a network by defining a Network class. Here's the code we use to initialize a Network object:
```
import numpy as np
class Network(object):
def __init__(self, sizes):
self.num_layers = len(sizes)
self.sizes = sizes
self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
self.weights = [np.random.randn(y, x)
for x, y in zip(sizes[:-1], sizes[1:])]
```
In this code, the list sizes contains the number of neurons in the respective layers. So, for example, if we want to create a Network object with 2 neurons in the first layer, 3 neurons in the second layer, and 1 neuron in the final layer, we'd do this with the code:
```
net = Network([2, 3, 1])
```
The biases and weights in the Network object are all initialized randomly, using the Numpy np.random.randn function to generate Gaussian distributions with mean 0 and standard deviation 1. This random initialization gives our stochastic gradient descent algorithm a place to start from. In later chapters we'll find better ways of initializing the weights and biases, but this will do for now. Note that the Network initialization code assumes that the first layer of neurons is an input layer, and omits to set any biases for those neurons, since biases are only ever used in computing the outputs from later layers.
Note also that the biases and weights are stored as lists of Numpy matrices. So, for example net.weights[1] is a Numpy matrix storing the weights connecting the second and third layers of neurons. (It's not the first and second layers, since Python's list indexing starts at 0.) Since net.weights[1] is rather verbose, let's just denote that matrix $w$. It's a matrix such that $w_{jk}$ is the weight for the connection between the $k^{th}$ neuron in the second layer, and the $j^{th}$ neuron in the third layer. This ordering of the $j$ and $k$ indices may seem strange - surely it'd make more sense to swap the $j$ and $k$ indices around? The big advantage of using this ordering is that it means that the vector of activations of the third layer of neurons is:
$$
a^{'}=\sigma(wa+b)
$$ (eq7_4)
There's quite a bit going on in this equation, so let's unpack it piece by piece. $a$ is the vector of activations of the second layer of neurons. To obtain $a^{'}$ we multiply $a$ by the weight matrix $w$, and add the vector $b$ of biases. We then apply the function $\sigma$ elementwise to every entry in the vector $wa+b$. (This is called vectorizing the function $\sigma$.)
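As a concrete (if minimal) sketch, here is one way the forward pass $a^{'}=\sigma(wa+b)$ could be computed with the Network object defined above. The `sigmoid` and `feedforward` helpers below are our own additions for illustration, not part of the class as written:
```
def sigmoid(z):
    # elementwise logistic activation
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(net, a):
    """Return the network's output for the input column vector `a`."""
    for b, w in zip(net.biases, net.weights):
        a = sigmoid(np.dot(w, a) + b)  # a' = sigma(w a + b), layer by layer
    return a

net = Network([2, 3, 1])
x = np.random.randn(2, 1)   # a single input with 2 features
print(feedforward(net, x))  # a 1x1 output
```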
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
```
# Linear models
Linear models are useful when little data is available or for very large feature spaces as in text classification. In addition, they form a good case study for regularization.
# Linear models for regression
All linear models for regression learn a coefficient parameter ``coef_`` and an offset ``intercept_`` to make predictions using a linear combination of features:
```
y_pred = x_test[0] * coef_[0] + ... + x_test[n_features-1] * coef_[n_features-1] + intercept_
```
The difference between the linear models for regression is what kind of restrictions or penalties are put on ``coef_`` as regularization, in addition to fitting the training data well.
The most standard linear model is the 'ordinary least squares regression', often simply called 'linear regression'. It doesn't put any additional restrictions on ``coef_``, so when the number of features is large, it becomes ill-posed and the model overfits.
Let us generate a simple simulation, to see the behavior of these models.
```
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
X, y, true_coefficient = make_regression(n_samples=200, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5, train_size=60)
print(X_train.shape)
print(y_train.shape)
```
## Linear Regression
$$ \text{min}_{w, b} \sum_i || w^\mathsf{T}x_i + b - y_i||^2 $$
```
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
from sklearn.model_selection import learning_curve
def plot_learning_curve(est, X, y):
training_set_size, train_scores, test_scores = learning_curve(est, X, y, train_sizes=np.linspace(.1, 1, 20))
estimator_name = est.__class__.__name__
line = plt.plot(training_set_size, train_scores.mean(axis=1), '--', label="training scores " + estimator_name)
plt.plot(training_set_size, test_scores.mean(axis=1), '-', label="test scores " + estimator_name, c=line[0].get_color())
plt.xlabel('Training set size')
plt.legend(loc='best')
plt.ylim(-0.1, 1.1)
plt.figure()
plot_learning_curve(LinearRegression(), X, y)
```
## Ridge Regression (L2 penalty)
**The Ridge estimator** is a simple regularization (called the l2 penalty) of the ordinary LinearRegression. In particular, it has the benefit of being no more computationally expensive than the ordinary least squares estimate.
$$ \text{min}_{w,b} \sum_i || w^\mathsf{T}x_i + b - y_i||^2 + \alpha ||w||_2^2$$
The amount of regularization is set via the `alpha` parameter of the Ridge.
```
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
```
Tuning alpha is critical for performance.
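A common way to pick `alpha` is cross-validation. As a rough sketch (not part of the original material), scikit-learn's `RidgeCV` searches a grid of candidate penalties and keeps the best one:
```
from sklearn.linear_model import RidgeCV

# Search a log-spaced grid of penalties and keep the value with the best CV score.
ridge_cv = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_train, y_train)
print("chosen alpha: %f" % ridge_cv.alpha_)
print("R^2 on test set: %f" % ridge_cv.score(X_test, y_test))
```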
```
plt.figure()
plot_learning_curve(LinearRegression(), X, y)
plot_learning_curve(Ridge(alpha=10), X, y)
```
## Lasso (L1 penalty)
**The Lasso estimator** is useful for imposing sparsity on the coefficients. In other words, it is to be preferred if we believe that many of the features are not relevant. This is done via the so-called l1 penalty.
$$ \text{min}_{w, b} \sum_i || w^\mathsf{T}x_i + b - y_i||^2 + \alpha ||w||_1$$
```
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
plt.figure()
plot_learning_curve(LinearRegression(), X, y)
plot_learning_curve(Ridge(alpha=10), X, y)
plot_learning_curve(Lasso(alpha=10), X, y)
```
# Solution b.
Create an inference script. Let's call it `inference.py`.
Let's also create the `input_fn`, `predict_fn`, `output_fn` and `model_fn` functions.
Copy the cells below and paste in [the main notebook](../xgboost_customer_churn_studio.ipynb).
```
%%writefile inference.py
import os
import pickle
import xgboost
import sagemaker_xgboost_container.encoder as xgb_encoders
# Same as in the training script
def model_fn(model_dir):
"""Load a model. For XGBoost Framework, a default function to load a model is not provided.
Users should provide customized model_fn() in script.
Args:
model_dir: a directory where model is saved.
Returns:
A XGBoost model.
XGBoost model format type.
"""
model_files = (file for file in os.listdir(model_dir) if os.path.isfile(os.path.join(model_dir, file)))
model_file = next(model_files)
try:
booster = pickle.load(open(os.path.join(model_dir, model_file), 'rb'))
format = 'pkl_format'
except Exception as exp_pkl:
try:
booster = xgboost.Booster()
booster.load_model(os.path.join(model_dir, model_file))
format = 'xgb_format'
except Exception as exp_xgb:
raise ModelLoadInferenceError("Unable to load model: {} {}".format(str(exp_pkl), str(exp_xgb)))
booster.set_param('nthread', 1)
return booster
def input_fn(request_body, request_content_type):
"""
The SageMaker XGBoost model server receives the request data body and the content type,
and invokes the `input_fn`.
    An input_fn that validates request_content_type and prints a message before decoding the payload.
"""
print("Hello from the PRE-processing function!!!")
if request_content_type == "text/csv":
return xgb_encoders.csv_to_dmatrix(request_body)
else:
raise ValueError(
"Content type {} is not supported.".format(request_content_type)
)
def predict_fn(input_object, model):
"""
SageMaker XGBoost model server invokes `predict_fn` on the return value of `input_fn`.
"""
return model.predict(input_object)[0]
def output_fn(prediction, response_content_type):
"""
After invoking predict_fn, the model server invokes `output_fn`.
An output_fn that just adds a column to the output and validates response_content_type
"""
print("Hello from the POST-processing function!!!")
    appended_output = "hello from the post-processing function!!!"
predictions = [prediction, appended_output]
if response_content_type == "text/csv":
return ','.join(str(x) for x in predictions)
else:
raise ValueError("Content type {} is not supported.".format(response_content_type))
```
Deploy the new model with the inference script:
- find the S3 bucket where the artifact is stored (you can create a tarball and upload it to S3 or use another model that was previously created in SageMaker)
#### Finding a previously trained model:
Go to the Experiments tab in Studio again:

Choose another trained model, such as the one trained with Framework mode (right-click and choose `Open in trial details`):

Click on `Artifacts` and look at the `Output artifacts`:

Copy the S3 URI where the model is saved from the `SageMaker.ModelArtifact` entry:
In this example:
```
s3_artifact="s3://sagemaker-studio-us-east-2-<AWS_ACCOUNT_ID>/xgboost-churn/output/demo-xgboost-customer-churn-2021-04-13-18-51-56-144/output/model.tar.gz"
```
```
s3_artifact="s3://<YOUR-BUCKET>/PATH/TO/model.tar.gz"
```
**Deploy it:**
```
from sagemaker.xgboost.model import XGBoostModel
xgb_inference_model = XGBoostModel(
entry_point="inference.py",
model_data=s3_artifact,
role=role,
image=docker_image_name,
framework_version="0.90-2",
py_version="py3"
)
data_capture_prefix = '{}/datacapture'.format(prefix)
endpoint_name = "model-xgboost-customer-churn-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("EndpointName = {}".format(endpoint_name))
predictor = xgb_inference_model.deploy( initial_instance_count=1,
instance_type='ml.m4.xlarge',
endpoint_name=endpoint_name,
data_capture_config=DataCaptureConfig(
enable_capture=True,
sampling_percentage=100,
destination_s3_uri='s3://{}/{}'.format(bucket, data_capture_prefix)
)
)
## Updating an existing endpoint
# predictor = xgb_inference_model.deploy( initial_instance_count=1,
# instance_type='ml.m4.xlarge',
# endpoint_name=endpoint_name,
# data_capture_config=DataCaptureConfig(
# enable_capture=True,
# sampling_percentage=100,
# destination_s3_uri='s3://{}/{}'.format(bucket, data_capture_prefix)
# ),
# update_endpoint=True
# )
```
**Send some requests:**
```
with open('data/test_sample.csv', 'r') as f:
for row in f:
payload = row.rstrip('\n')
print(f"Sending: {payload}")
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/csv',
Accept='text/csv',
Body=payload)
print(f"\nReceived: {response['Body'].read()}")
break
```
Go to CloudWatch logs and check the inference logic:
[Link to CloudWatch Logs](https://us-east-2.console.aws.amazon.com/cloudwatch/home?region=us-east-2#logsV2:log-groups$3FlogGroupNameFilter$3D$252Faws$252Fsagemaker$252FEndpoints$252F)
# Model Layers
This module contains many layer classes that we might be interested in using in our models. These layers complement the default [Pytorch layers](https://pytorch.org/docs/stable/nn.html) which we can also use as predefined layers.
```
from fastai import *
from fastai.vision import *
from fastai.gen_doc.nbdoc import *
show_doc(AdaptiveConcatPool2d, doc_string=False)
from fastai.gen_doc.nbdoc import *
from fastai.layers import *
```
Layer that concats `AdaptiveAvgPool2d` and `AdaptiveMaxPool2d`. Output will be `2*sz` or 2 if `sz` is None.
The [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) object uses adaptive average pooling and adaptive max pooling and concatenates them both. We use this because it provides the model with the information of both methods and improves performance. This technique is called `adaptive` because it allows us to decide on what output dimensions we want, instead of choosing the input's dimensions to fit a desired output size.
Let's try training with Adaptive Average Pooling first, then with Adaptive Max Pooling and finally with the concatenation of them both to see how they fare in performance.
We will first define a [`simple_cnn`](/layers.html#simple_cnn) using [Adaptive Max Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveMaxPool2d) by changing the source code a bit.
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
def simple_cnn_max(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveMaxPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_max((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
```
Now let's try with [Adaptive Average Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d).
```
def simple_cnn_avg(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_avg((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
```
Finally we will try with the concatenation of them both [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d). We will see that, in fact, it increases our accuracy and decreases our loss considerably!
```
def simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(AdaptiveConcatPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
show_doc(Lambda, doc_string=False)
```
Lambda allows us to define functions and use them as layers in our networks inside a [Sequential](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential) object.
So, for example, say we want to apply a [log_softmax loss](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.log_softmax) and we need to change the shape of our output batches to be able to use this loss. We can add a layer that applies the necessary change in shape by calling:
`Lambda(lambda x: x.view(x.size(0),-1))`
Let's see an example of how the shape of our output can change when we add this layer.
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0),-1))
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(Flatten)
```
The function we build above is actually implemented in our library as [`Flatten`](/layers.html#Flatten). We can see that it returns the same size when we run it.
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Flatten(),
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(PoolFlatten)
```
We can combine these two final layers ([AdaptiveAvgPool2d](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) and [`Flatten`](/layers.html#Flatten)) by using [`PoolFlatten`](/layers.html#PoolFlatten).
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
PoolFlatten()
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(ResizeBatch)
```
Another use we give to the Lambda function is to resize batches with [`ResizeBatch`](/layers.html#ResizeBatch) when we have a layer that expects a different input than what comes from the previous one. Let's see an example:
```
a = torch.tensor([[1., -1.], [1., -1.]])
print(a)
out = ResizeBatch(4)
print(out(a))
show_doc(CrossEntropyFlat, doc_string=False)
```
Same as [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss), but flattens input and target. It is used to calculate cross entropy on arrays (which Pytorch will not let us do with their [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss) function). An example use case is image segmentation models, where the output is an image (or an array of pixels).
The parameters are the same as [nn.CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss): `weight` to rescale each class, `size_average` whether we want to average the losses over the elements in a batch or sum them up, `ignore_index` which targets we want to ignore, `reduce` whether we want to return a loss per batch element, and `reduction` which type of reduction (if any) we want to apply to the output.
```
show_doc(MSELossFlat)
show_doc(Debugger)
```
The debugger module allows us to peek inside a network while it's training and see in detail what is going on. We can see inputs, outputs and sizes at any point in the network.
For instance, if you run the following:
``` python
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
Debugger(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
model.cuda()
learner = Learner(data, model, metrics=[accuracy])
learner.fit(5)
```
... you'll see something like this:
```
/home/ubuntu/fastai/fastai/layers.py(74)forward()
72 def forward(self,x:Tensor) -> Tensor:
73 set_trace()
---> 74 return x
75
76 class StdUpsample(nn.Module):
ipdb>
```
```
show_doc(NoopLoss)
show_doc(WassersteinLoss)
show_doc(PixelShuffle_ICNR)
show_doc(bn_drop_lin, doc_string=False)
```
The [`bn_drop_lin`](/layers.html#bn_drop_lin) function returns a sequence of [batch normalization](https://arxiv.org/abs/1502.03167), [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) and a linear layer. This custom layer is usually used at the end of a model.
`n_in` represents the size of the input, `n_out` the size of the output, `bn` whether we want batch norm or not, `p` how much dropout to use, and `actn` is an optional parameter to add an activation function at the end.
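For instance, a small classification head could be sketched with it as follows (the layer sizes here are arbitrary illustrations, not values from the docs):
```
head = nn.Sequential(
    *bn_drop_lin(512, 256, bn=True, p=0.5, actn=nn.ReLU(inplace=True)),
    *bn_drop_lin(256, 10, bn=True, p=0.25, actn=None),
)
head
```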
```
show_doc(conv2d)
show_doc(conv2d_trans)
show_doc(conv_layer, doc_string=False)
```
The [`conv_layer`](/layers.html#conv_layer) function returns a sequence of [nn.Conv2D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d), [BatchNorm](https://arxiv.org/abs/1502.03167) and a ReLU or [leaky RELU](https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf) activation function.
`n_in` represents the size of the input, `n_out` the size of the output, `ks` the kernel size, and `stride` the stride with which we want to apply the convolutions. `bias` will decide if they have bias or not (if None, defaults to True unless using batchnorm). `norm_type` selects the type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`, otherwise it's a `LeakyReLU` of slope `leaky`. Finally if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`.
```
show_doc(embedding, doc_string=False)
```
Create an [embedding layer](https://arxiv.org/abs/1711.09160) with input size `ni` and output size `nf`.
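As a quick sketch of what this returns (the sizes and the batch of ids below are arbitrary, made-up values):
```
emb = embedding(1000, 50)           # 1,000 categories -> 50-dimensional vectors
idx = torch.randint(0, 1000, (4,))  # a hypothetical mini-batch of 4 category ids
emb(idx).shape                      # -> torch.Size([4, 50])
```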
```
show_doc(simple_cnn)
show_doc(std_upsample_head, doc_string=False)
```
Create a sequence of upsample layers with a ReLU at the beginning and a [nn.ConvTranspose2d](https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d).
`nfs` is a list with the input and output sizes of each upsample layer and `c` is the output size of the final 2D Transpose Convolutional layer.
```
show_doc(trunc_normal_)
show_doc(icnr)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(Debugger.forward)
show_doc(MSELossFlat.forward)
show_doc(CrossEntropyFlat.forward)
show_doc(Lambda.forward)
show_doc(AdaptiveConcatPool2d.forward)
show_doc(NoopLoss.forward)
show_doc(icnr)
show_doc(PixelShuffle_ICNR.forward)
show_doc(WassersteinLoss.forward)
```
## New Methods - Please document or move to the undocumented section
# Ch05
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import statsmodels.api as sm
%load_ext autoreload
%autoreload 2
plt.style.use('seaborn-talk')
plt.style.use('bmh')
pd.set_option('display.max_rows', 100)
```
## 5.1 Generate a time series from an IID Gaussian random process. This is a memory-less, stationary series:
```
np.random.seed(0)
N = 252 * 10
s = pd.Series(np.random.randn(N))
s.plot()
```
## (a) Compute the ADF statistic on this series. What is the p-value?
```
adf = lambda s: sm.tsa.stattools.adfuller(s)
p_val = lambda s: sm.tsa.stattools.adfuller(s)[1]
res = adf(s)
p = res[1]
res, p
```
## (b) Compute the cumulative sum of the observations. This is a non-stationary series w/o memory.
```
cmsm = pd.Series(s).cumsum()
cmsm.plot()
p_val(cmsm)
```
## 5.2 Generate a time series that follows a sinusoidal function. This is a stationary series with memory.
```
np.random.seed(0)
rand = np.random.random(N)
idx = np.linspace(0, 10, N)
s = pd.Series(1 * np.sin(2. * idx + .5))
s.plot()
p_val(s)
```
## (b) Shift every observation by the same positive value. Compute the cumulative sum of the observations. This is a non-stationary series with memory.
```
s_ = (s + 1).cumsum().rename('fake_close').to_frame()
s_.plot()
adf(s_['fake_close'].dropna()), p_val(s_['fake_close'])
def getWeights(d, size):
    # iteratively compute the binomial weights of (1 - B)^d (most recent weight is 1)
w = [1.]
for k in range(1, size):
w_ = -w[-1] / k * (d - k + 1)
w.append(w_)
w = np.array(w[::-1]).reshape(-1, 1)
return w
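# Quick illustration of the weight recursion: for d=0.5 and size=5 the (reversed)
# weights come out as [-0.0390625, -0.0625, -0.125, -0.5, 1.0], i.e. the most
# recent observation always gets weight 1 and older ones decay in magnitude.
print(getWeights(0.5, 5).ravel())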
s_.shape[0]
%%time
w = getWeights(0.1, s_.shape[0])
def fracDiff(series, d, thres=0.01):
'''
Increasing width window, with treatment of NaNs
Note 1: For thres=1, nothing is skipped
Note 2: d can be any positive fractional, not necessarily
bounded between [0,1]
'''
#1) Compute weights for the longest series
w=getWeights(d, series.shape[0])
#bp()
#2) Determine initial calcs to be skipped based on weight-loss threshold
w_=np.cumsum(abs(w))
w_ /= w_[-1]
skip = w_[w_>thres].shape[0]
#3) Apply weights to values
df={}
for name in series.columns:
seriesF, df_=series[[name]].fillna(method='ffill').dropna(), pd.Series()
for iloc in range(skip, seriesF.shape[0]):
loc=seriesF.index[iloc]
test_val = series.loc[loc,name] # must resample if duplicate index
if isinstance(test_val, (pd.Series, pd.DataFrame)):
test_val = test_val.resample('1m').mean()
if not np.isfinite(test_val).any(): continue # exclude NAs
try:
df_.loc[loc]=np.dot(w[-(iloc+1):,:].T, seriesF.loc[:loc])[0,0]
except:
continue
df[name]=df_.copy(deep=True)
df=pd.concat(df,axis=1)
return df
df0 = fracDiff(s_, 0.1)
df0.head()
cols = ['adfStat','pVal','lags','nObs','95% conf'] #,'corr']
out = pd.DataFrame(columns = cols)
for d in np.linspace(0,1,11):
try:
df0 = fracDiff(s_, d)
df0 = sm.tsa.stattools.adfuller(df0['fake_close'], maxlag=1, regression='c', autolag=None)
out.loc[d] = list(df0[:4])+[df0[4]['5%']]
except:
break
f, ax = plt.subplots()
out['adfStat'].plot(ax=ax, marker='X')
ax.axhline(out['95% conf'].mean(), lw=1, color='r', ls='dotted')
ax.set_title('min d with thresh=0.01')
ax.set_xlabel('d values')
ax.set_ylabel('adf stat');
display(out)
```
# APPENDIX
# An Introduction to the Amazon SageMaker IP Insights Algorithm
#### Unsupervised anomaly detection for suspicious IP addresses
-------
1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Training](#Training)
4. [Inference](#Inference)
5. [Epilogue](#Epilogue)
## Introduction
-------
The Amazon SageMaker IP Insights algorithm uses statistical modeling and neural networks to capture associations between online resources (such as account IDs or hostnames) and IPv4 addresses. Under the hood, it learns vector representations for online resources and IP addresses. This essentially means that if the vectors representing an IP address and an online resource are close together, then it is likely for that IP address to access that online resource, even if it has never accessed it before.
In this notebook, we use the Amazon SageMaker IP Insights algorithm to train a model on synthetic data. We then use this model to perform inference on the data and show how to discover anomalies. After running this notebook, you should be able to:
- obtain, transform, and store data for use in Amazon SageMaker,
- create an AWS SageMaker training job to produce an IP Insights model,
- use the model to perform inference with an Amazon SageMaker endpoint.
If you would like to know more, please check out the [SageMaker IP Insights Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights.html).
## Setup
------
*This notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.*
Our first step is to setup our AWS credentials so that AWS SageMaker can store and access training data and model artifacts.
### Select Amazon S3 Bucket
We first need to specify the locations where we will store our training data and trained model artifacts. ***This is the only cell of this notebook that you will need to edit.*** In particular, we need the following data:
- `bucket` - An S3 bucket accessible by this account.
- `prefix` - The location in the bucket where this notebook's input and output data will be stored. (The default value is sufficient.)
```
import boto3
import botocore
import os
import sagemaker
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/ipinsights-tutorial"
execution_role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# check if the bucket exists
try:
boto3.Session().client("s3").head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
print("Hey! You either forgot to specify your S3 bucket or you gave your bucket an invalid name!")
except botocore.exceptions.ClientError as e:
if e.response["Error"]["Code"] == "403":
print(f"Hey! You don't have permission to access the bucket, {bucket}.")
elif e.response["Error"]["Code"] == "404":
print(f"Hey! Your bucket, {bucket}, doesn't exist!")
else:
raise
else:
print(f"Training input/output will be stored in: s3://{bucket}/{prefix}")
```
Next, we download the modules necessary for synthetic data generation if they do not already exist.
```
from os import path
tools_bucket = f"jumpstart-cache-prod-{region}" # Bucket containing the data generation module.
tools_prefix = "1p-algorithms-assets/ip-insights" # Prefix for the data generation module
s3 = boto3.client("s3")
data_generation_file = "generate_data.py" # Synthetic data generation module
script_parameters_file = "ip2asn-v4-u32.tsv.gz"
if not path.exists(data_generation_file):
s3.download_file(tools_bucket, f"{tools_prefix}/{data_generation_file}", data_generation_file)
if not path.exists(script_parameters_file):
s3.download_file(tools_bucket, f"{tools_prefix}/{script_parameters_file}", script_parameters_file)
```
### Dataset
Apache Web Server ("httpd") is the most popular web server used on the internet. And luckily for us, it logs all requests processed by the server - by default. If a web page requires HTTP authentication, the Apache Web Server will log the IP address and authenticated user name for each requested resource.
The [access logs](https://httpd.apache.org/docs/2.4/logs.html) are typically on the server under the file `/var/log/httpd/access_log`. From the example log output below, we see which IP addresses each user has connected with:
```
192.168.1.100 - user1 [15/Oct/2018:18:58:32 +0000] "GET /login_success?userId=1 HTTP/1.1" 200 476 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
192.168.1.102 - user2 [15/Oct/2018:18:58:35 +0000] "GET /login_success?userId=2 HTTP/1.1" 200 - "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
...
```
If we want to train an algorithm to detect suspicious activity, this dataset is ideal for SageMaker IP Insights.
First, we determine the resource we want to be analyzing (such as a login page or access to a protected file). Then, we construct a dataset containing the history of all past user interactions with the resource. We extract out each 'access event' from the log and store the corresponding user name and IP address in a headerless CSV file with two columns. The first column will contain the user identifier string, and the second will contain the IPv4 address in decimal-dot notation.
```
user1, 192.168.1.100
user2, 192.168.1.102
...
```
As a side note, the dataset should include all access events. That means some `<user_name, ip_address>` pairs will be repeated.
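If your access events live in raw Apache Common Log Format, a small parser is enough to produce the two-column CSV described above. The sketch below uses a simplified regular expression and made-up file names; adapt it to your own log layout:
```
import csv
import re

# Simplified Common Log Format: IP address, identd, authenticated user, then the rest.
LOG_PATTERN = re.compile(r'^(\S+) (\S+) (\S+) ')

with open("access_log") as logs, open("ipinsights_train.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for line in logs:
        match = LOG_PATTERN.match(line)
        if match is None:
            continue
        ip_address, _, user = match.groups()
        if user != "-":  # skip requests without an authenticated user
            writer.writerow([user, ip_address])
```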
#### User Activity Simulation
For this example, we are going to simulate our own web-traffic logs. We mock up a toy website example and simulate users logging into the website from mobile devices.
The details of the simulation are explained in the script [here](./generate_data.py).
```
from generate_data import generate_dataset
# We simulate traffic for 10,000 users. This should yield about 3 million log lines (~700 MB).
NUM_USERS = 10000
log_file = "ipinsights_web_traffic.log"
generate_dataset(NUM_USERS, log_file)
# Visualize a few log lines
!head $log_file
```
### Prepare the dataset
Now that we have our logs, we need to transform them into a format that IP Insights can use. As we mentioned above, we need to:
1. Choose the resource which we want to analyze users' history for
2. Extract our users' usage history of IP addresses
3. Separate our dataset into a training and a test set. This will allow us to check for overfitting by evaluating the model on 'unseen' login events.
For the rest of the notebook, we assume that the Apache Access Logs are in the Common Log Format as defined by the [Apache documentation](https://httpd.apache.org/docs/2.4/logs.html#accesslog). We start with reading the logs into a Pandas DataFrame for easy data exploration and pre-processing.
```
import pandas as pd
df = pd.read_csv(
log_file,
sep=" ",
na_values="-",
header=None,
names=["ip_address","rcf_id","user","timestamp","time_zone","request", "status", "size", "referer", "user_agent"]
)
df.head()
```
We convert the log timestamp strings into Python datetimes so that we can sort and compare the data more easily.
```
# Convert time stamps to DateTime objects
df["timestamp"] = pd.to_datetime(df["timestamp"], format="[%d/%b/%Y:%H:%M:%S")
```
We also verify the time zones of all of the time stamps. If the log contains more than one time zone, we would need to standardize the timestamps.
```
# Check if they are all in the same timezone
num_time_zones = len(df["time_zone"].unique())
num_time_zones
```
As we see above, there is only one value in the entire `time_zone` column. Therefore, all of the timestamps are in the same time zone, and we do not need to standardize them. We can skip the next cell and go to [1. Selecting a Resource](#1.-Select-Resource).
If there is more than one time_zone in your dataset, then we parse the timezone offset and update the corresponding datetime object.
**Note:** The next cell takes about 5-10 minutes to run.
```
from datetime import datetime
import pytz
def apply_timezone(row):
tz = row[1]
tz_offset = int(tz[:3]) * 60 # Hour offset
tz_offset += int(tz[3:5]) # Minutes offset
return row[0].replace(tzinfo=pytz.FixedOffset(tz_offset))
if num_time_zones > 1:
df["timestamp"] = df[["timestamp", "time_zone"]].apply(apply_timezone, axis=1)
```
#### 1. Select Resource
Our goal is to train an IP Insights algorithm to analyze the history of user logins such that we can predict how suspicious a login event is.
In our simulated web server, the server logs a `GET` request to the `/login_success` page every time a user successfully logs in. We filter our Apache logs for `GET` requests for `/login_success`. We also filter for requests that have a `status_code == 200`, to ensure that the page request was well formed.
**Note:** every web server handles logins differently. For your dataset, determine which resource you need to analyze to correctly frame this problem. Depending on your use case, you may need to do more data exploration and preprocessing.
```
df = df[(df["request"].str.startswith("GET /login_success")) & (df["status"] == 200)]
```
#### 2. Extract Users and IP address
Now that our DataFrame only includes log events for the resource we want to analyze, we extract the relevant fields to construct a IP Insights dataset.
IP Insights takes in a headerless CSV file with two columns: an entity (username) ID string and the IPv4 address in decimal-dot notation. Fortunately, the Apache Web Server Access Logs output IP addresses and authenticated usernames in their own columns.
**Note:** Each website handles user authentication differently. If the Access Log does not output an authenticated user, you could explore the website's query strings or work with your website developers on another solution.
```
df = df[["user", "ip_address", "timestamp"]]
```
#### 3. Create training and test dataset
As part of training a model, we want to evaluate how it generalizes to data it has never seen before.
Typically, you create a test set by reserving a random percentage of your dataset and evaluating the model after training. However, for machine learning models that make future predictions on historical data, we want to use out-of-time testing. Instead of randomly sampling our dataset, we split our dataset into two contiguous time windows. The first window is the training set, and the second is the test set.
We first look at the time range of our dataset to select a date to use as the partition between the training and test set.
```
df["timestamp"].describe()
```
We have login events for 10 days. Let's take the first week (7 days) of data as training and then use the last 3 days for the test set.
```
time_partition = (
datetime(2018, 11, 11, tzinfo=pytz.FixedOffset(0))
if num_time_zones > 1
else datetime(2018, 11, 11)
)
train_df = df[df["timestamp"] <= time_partition]
test_df = df[df["timestamp"] > time_partition]
```
Now that we have our training dataset, we shuffle it.
Shuffling improves the model's performance since SageMaker IP Insights uses stochastic gradient descent. This ensures that login events for the same user are less likely to occur in the same mini batch. This allows the model to improve its performance in between predictions of the same user, which will improve training convergence.
```
# Shuffle train data
train_df = train_df.sample(frac=1)
train_df.head()
```
### Store Data on S3
Now that we have simulated (or scraped) our datasets, we have to prepare and upload it to S3.
We will be doing local inference, therefore we don't need to upload our test dataset.
```
# Output dataset as headerless CSV
train_data = train_df.to_csv(index=False, header=False, columns=["user", "ip_address"])
# Upload data to S3 key
train_data_file = "train.csv"
key = os.path.join(prefix, "train", train_data_file)
s3_train_data = f"s3://{bucket}/{key}"
print(f"Uploading data to: {s3_train_data}")
boto3.resource("s3").Bucket(bucket).Object(key).put(Body=train_data)
# Configure SageMaker IP Insights Input Channels
input_data = {
"train": sagemaker.session.s3_input(
s3_train_data, distribution="FullyReplicated", content_type="text/csv"
)
}
```
## Training
---
Once the data is preprocessed and available in the necessary format, the next step is to train our model on the data. There are a number of parameters required by the SageMaker IP Insights algorithm to configure the model and define the computational environment in which training will take place. The first of these is to point to a container image which holds the algorithm's training and hosting code:
```
from sagemaker.amazon.amazon_estimator import get_image_uri
image = get_image_uri(boto3.Session().region_name, "ipinsights")
```
Then, we need to determine the training cluster to use. The IP Insights algorithm supports both CPU and GPU training. We recommend using GPU machines as they will train faster. However, when the size of your dataset increases, it can become more economical to use multiple CPU machines running with distributed training. See [Recommended Instance Types](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights.html#ip-insights-instances) for more details.
### Training Job Configuration
- **train_instance_type**: the instance type to train on. We recommend `p3.2xlarge` for single GPU, `p3.8xlarge` for multi-GPU, and `m5.2xlarge` if using distributed training with CPU;
- **train_instance_count**: the number of worker nodes in the training cluster.
We also need to configure SageMaker IP Insights-specific hyperparameters:
### Model Hyperparameters
- **num_entity_vectors**: the total number of embeddings to train. We use an internal hashing mechanism to map the entity ID strings to an embedding index; therefore, using an embedding size larger than the total number of possible values helps reduce the number of hash collisions. We recommend this value to be 2x the total number of unique entities (i.e. user names) in your dataset;
- **vector_dim**: the size of the entity and IP embedding vectors. The larger the value, the more information can be encoded using these representations but using too large vector representations may cause the model to overfit, especially for small training data sets;
- **num_ip_encoder_layers**: the number of layers in the IP encoder network. The larger the number of layers, the higher the model capacity to capture patterns among IP addresses. However, large number of layers increases the chance of overfitting. `num_ip_encoder_layers=1` is a good value to start experimenting with;
- **random_negative_sampling_rate**: the number of randomly generated negative samples to produce per 1 positive sample; `random_negative_sampling_rate=1` is a good value to start experimenting with;
- Random negative samples are produced by drawing each octet from a uniform distribution over [0, 255];
- **shuffled_negative_sampling_rate**: the number of shuffled negative samples to produce per 1 positive sample; `shuffled_negative_sampling_rate=1` is a good value to start experimenting with;
- Shuffled negative samples are produced by shuffling the accounts within a batch;
### Training Hyperparameters
- **epochs**: the number of epochs to train. Increase this value if you continue to see the accuracy and cross entropy improving over the last few epochs;
- **mini_batch_size**: how many examples in each mini_batch. A smaller number improves convergence with stochastic gradient descent. But a larger number is necessary if using shuffled_negative_sampling to avoid sampling a wrong account for a negative sample;
- **learning_rate**: the learning rate for the Adam optimizer (try ranges in [0.001, 0.1]). Too large learning rate may cause the model to diverge since the training would be likely to overshoot minima. On the other hand, too small learning rate slows down the convergence;
- **weight_decay**: L2 regularization coefficient. Regularization is required to prevent the model from overfitting the training data. Too large of a value will prevent the model from learning anything;
For more details, see [Amazon SageMaker IP Insights (Hyperparameters)](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights-hyperparameters.html). Additionally, most of these hyperparameters can be found using SageMaker Automatic Model Tuning; see [Amazon SageMaker IP Insights (Model Tuning)](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights-tuning.html) for more details.
```
# Set up the estimator with training job configuration
ip_insights = sagemaker.estimator.Estimator(
image,
execution_role,
instance_count=1,
instance_type="ml.p3.2xlarge",
output_path=f"s3://{bucket}/{prefix}/output",
sagemaker_session=sagemaker.Session(),
)
# Configure algorithm-specific hyperparameters
ip_insights.set_hyperparameters(
num_entity_vectors="20000",
random_negative_sampling_rate="5",
vector_dim="128",
mini_batch_size="1000",
epochs="5",
learning_rate="0.01",
)
# Start the training job (should take about ~1.5 minute / epoch to complete)
ip_insights.fit(input_data)
```
If you see the message
> Completed - Training job completed
at the bottom of the output logs, then training completed successfully and the output of the SageMaker IP Insights model was stored in the specified output path. You can also view information about, and the status of, a training job in the AWS SageMaker console: click on the "Jobs" tab and select the training job matching the name printed below:
```
print(f"Training job name: {ip_insights.latest_training_job.job_name}")
```
## Inference
-----
Now that we have trained a SageMaker IP Insights model, we can deploy the model to an endpoint to start performing inference on data. In this case, that means providing it a `<user, IP address>` pair and predicting their compatibility score.
We can create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference will be performed, as well as the initial number of instances to spin up. We recommend using the `ml.m5` instance as it provides the most memory at the lowest cost. Verify how large your model is in S3 and pick the instance type with the appropriate amount of memory.
```
predictor = ip_insights.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```
Congratulations, you now have a SageMaker IP Insights inference endpoint! You could start integrating this endpoint with your production services to start querying incoming requests for abnormal behavior.
You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name below:
```
print(f"Endpoint name: {predictor.endpoint}")
```
### Data Serialization/Deserialization
We can pass data in a variety of formats to our inference endpoint. In this example, we will pass CSV-formatted data. Other available formats are JSON-formatted and JSON Lines-formatted. We make use of the SageMaker Python SDK utilities `csv_serializer` and `json_deserializer` when configuring the inference endpoint.
```
from sagemaker.predictor import csv_serializer, json_deserializer
predictor.serializer = csv_serializer
predictor.deserializer = json_deserializer
```
Now that the predictor is configured, it is as easy as passing in a matrix of inference data.
We can take a few samples from the simulated dataset above, so we can see what the output looks like.
```
inference_data = [(data[0], data[1]) for data in train_df[:5].values]
predictor.predict(
inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json"}
)
```
By default, the predictor will only output the `dot_product` between the learned IP address and the online resource (in this case, the user ID). The dot product summarizes the compatibility between the IP address and online resource. The larger the value, the more likely the algorithm thinks the IP address is to be used by the user. This compatibility score is sufficient for most applications, as we can define a threshold for what we consider an anomalous score.
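For example, here is a minimal sketch of flagging events from the response format shown above (the `threshold = 0.0` is an arbitrary placeholder; choosing a real threshold is covered in the "Compute Anomaly Scores" section below):
```
response = predictor.predict(
    inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json"}
)

# Low compatibility (a small dot product) is suspicious
scores = [prediction["dot_product"] for prediction in response["predictions"]]
threshold = 0.0  # placeholder; see "Compute Anomaly Scores" below
flagged = [pair for pair, score in zip(inference_data, scores) if score < threshold]
print(flagged)
```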
However, more advanced users may want to inspect the learned embeddings and use them in further applications. We can configure the predictor to provide the learned embeddings by specifying the `verbose=True` parameter in the Accept header. You should see that each 'prediction' object contains three keys: `ip_embedding`, `entity_embedding`, and `dot_product`.
```
predictor.predict(
inference_data,
initial_args={"ContentType": "text/csv", "Accept": "application/json; verbose=True"},
)
```
## Compute Anomaly Scores
----
The `dot_product` output of the model provides a good measure of how compatible an IP address and online resource are. However, the range of the dot_product is unbounded. This means that to consider an event anomalous we need to define a threshold, such that when we score an event, if the anomaly score (here, the negative of the `dot_product`) is above the threshold, we flag the behavior as anomalous. However, picking a threshold can be more of an art, and a good threshold depends on the specifics of your problem and dataset.
In the following section, we show how to pick a simple threshold by comparing the score distributions between known normal and malicious traffic:
1. We construct a test set of 'Normal' traffic;
2. Inject 'Malicious' traffic into the dataset;
3. Plot the distribution of scores for the model on the 'Normal' traffic and the 'Malicious' traffic;
4. Select a threshold value which separates the normal distribution from the malicious traffic distribution. This value depends on your false-positive tolerance.
### 1. Construct 'Normal' Traffic Dataset
We previously [created a test set](#3.-Create-training-and-test-dataset) from our simulated Apache access logs dataset. We use this test dataset as the 'Normal' traffic in the test case.
```
test_df.head()
```
### 2. Inject Malicious Traffic
If we had a dataset with enough real malicious activity, we would use it to determine a good threshold. However, such datasets are hard to come by, so instead we simulate malicious web traffic that mimics a realistic attack scenario.
We take a set of user accounts from the test set and randomly generate IP addresses. The users should not have used these IP addresses during training. This simulates an attacker logging in to a user account without knowledge of their IP history.
```
import numpy as np
import pandas as pd

from generate_data import draw_ip
def score_ip_insights(predictor, df):
def get_score(result):
"""Return the negative to the dot product of the predictions from the model."""
return [-prediction["dot_product"] for prediction in result["predictions"]]
df = df[["user", "ip_address"]]
result = predictor.predict(df.values)
return get_score(result)
def create_test_case(train_df, test_df, num_samples, attack_freq):
"""Creates a test case from provided train and test data frames.
This generates test case for accounts that are both in training and testing data sets.
    :param train_df: (pandas.DataFrame with columns ['user', 'ip_address']) training DataFrame
    :param test_df: (pandas.DataFrame with columns ['user', 'ip_address']) testing DataFrame
:param num_samples: (int) number of test samples to use
:param attack_freq: (float) the ratio of negative_samples:positive_samples to generate for test case
:return: DataFrame with both good and bad traffic, with labels
"""
# Get all possible accounts. The IP Insights model can only make predictions on users it has seen in training
# Therefore, filter the test dataset for unseen accounts, as their results will not mean anything.
valid_accounts = set(train_df["user"])
valid_test_df = test_df[test_df["user"].isin(valid_accounts)]
good_traffic = valid_test_df.sample(num_samples, replace=False)
good_traffic = good_traffic[["user", "ip_address"]]
good_traffic["label"] = 0
# Generate malicious traffic
num_bad_traffic = int(num_samples * attack_freq)
bad_traffic_accounts = np.random.choice(list(valid_accounts), size=num_bad_traffic, replace=True)
bad_traffic_ips = [draw_ip() for i in range(num_bad_traffic)]
bad_traffic = pd.DataFrame({"user": bad_traffic_accounts, "ip_address": bad_traffic_ips})
bad_traffic["label"] = 1
    # All traffic labels are: 0 for good traffic; 1 for bad traffic.
    all_traffic = pd.concat([good_traffic, bad_traffic])  # DataFrame.append is deprecated in newer pandas
return all_traffic
NUM_SAMPLES = 100000
test_case = create_test_case(train_df, test_df, num_samples=NUM_SAMPLES, attack_freq=1)
test_case.head()
test_case_scores = score_ip_insights(predictor, test_case)
```
### 3. Plot Distribution
Now, we plot the distribution of scores. Looking at this distribution will inform us on where we can set a good threshold, based on our risk tolerance.
```
%matplotlib inline
import matplotlib.pyplot as plt
n, x = np.histogram(test_case_scores[:NUM_SAMPLES], bins=100, density=True)
plt.plot(x[1:], n)
n, x = np.histogram(test_case_scores[NUM_SAMPLES:], bins=100, density=True)
plt.plot(x[1:], n)
plt.legend(["Normal", "Random IP"])
plt.xlabel("IP Insights Score")
plt.ylabel("Frequency")
plt.figure()
```
### 4. Selecting a Good Threshold
As we see in the figure above, there is a clear separation between normal traffic and random traffic.
We could select a threshold depending on the application.
- If we were working with low impact decisions, such as whether to ask for another factor of authentication during login, we could use a `threshold = 0.0`. This would result in catching more true positives, at the cost of more false positives.
- If our decision system were more sensitive to false positives, we could choose a larger threshold, such as `threshold = 10.0`. That way, if we were sending the flagged cases to manual investigation, we would have higher confidence that the activity was suspicious.
```
threshold = 0.0
flagged_cases = test_case[np.array(test_case_scores) > threshold]
num_flagged_cases = len(flagged_cases)
num_true_positives = len(flagged_cases[flagged_cases["label"] == 1])
num_false_positives = len(flagged_cases[flagged_cases["label"] == 0])
num_all_positives = len(test_case.loc[test_case["label"] == 1])
print(f"When threshold is set to: {threshold}")
print(f"Total of {num_flagged_cases} flagged cases")
print(f"Total of {num_true_positives} flagged cases are true positives")
print(f"True Positive Rate: {num_true_positives / float(num_flagged_cases)}")
print(f"Recall: {num_true_positives / float(num_all_positives)}")
print(f"Precision: {num_true_positives / float(num_flagged_cases)}")
```
## Epilogue
----
In this notebook, we have shown how to configure the basic training, deployment, and usage of the Amazon SageMaker IP Insights algorithm. All SageMaker algorithms come with support for two additional services that make optimizing and using the algorithm that much easier: Automatic Model Tuning and the Batch Transform service.
### Amazon SageMaker Automatic Model Tuning
The results above were based on using the default hyperparameters of the SageMaker IP Insights algorithm. If we wanted to improve the model's performance even more, we can use [Amazon SageMaker Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) to automate the process of finding the hyperparameters.
#### Validation Dataset
Previously, we separated our dataset into a training and test set to validate the performance of a single IP Insights model. However, when we do model tuning, we train many IP Insights models in parallel. If we were to use the same test dataset to select the best model, we would bias our model selection: we wouldn't know whether we selected the best model in general, or just the best model for that particular dataset.
Therefore, we need to separate our test set into a validation dataset and a test dataset. The validation dataset is used for model selection. Then, once we pick the model with the best performance, we evaluate the winning model on the test set just as before.
#### Validation Metrics
For SageMaker Automatic Model Tuning to work, we need an objective metric which determines the performance of the model we want to optimize. Because SageMaker IP Insights is an unsupervised algorithm, we do not have a clearly defined metric for performance (such as the percentage of fraudulent events discovered).
We allow the user to provide a validation set of sample data (same format as the training data above) through the `validation` channel. We then fix the negative sampling strategy to use `random_negative_sampling_rate=1` and `shuffled_negative_sampling_rate=0` and generate a validation dataset by assigning corresponding labels to the real and simulated data. We then calculate the model's `discriminator_auc` metric by taking the model's predicted labels and the 'true' simulated labels and computing the Area Under the ROC Curve (AUC).
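As a rough offline analogue of that metric (a sketch only, not what SageMaker computes internally), we could compute an AUC on the labeled test case we built above, since `test_case_scores` already holds the negative dot products and `test_case["label"]` marks the simulated malicious traffic:
```
from sklearn.metrics import roc_auc_score

# Higher score = more anomalous; label 1 marks the simulated malicious traffic
offline_auc = roc_auc_score(test_case["label"], test_case_scores)
print(f"Offline AUC on the labeled test case: {offline_auc:.3f}")
```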
We set up the `HyperparameterTuner` to maximize the `discriminator_auc` on the validation dataset. We also need to set the search space for the hyperparameters. We give recommended ranges for the hyperparameters in the [Amazon SageMaker IP Insights (Hyperparameters)](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights-hyperparameters.html) documentation.
```
test_df["timestamp"].describe()
```
The test set we constructed above spans 3 days. We reserve the first day as the validation set and the subsequent two days for the test set.
```
time_partition = (
datetime(2018, 11, 13, tzinfo=pytz.FixedOffset(0))
if num_time_zones > 1
else datetime(2018, 11, 13)
)
validation_df = test_df[test_df["timestamp"] < time_partition]
test_df = test_df[test_df["timestamp"] >= time_partition]
valid_data = validation_df.to_csv(index=False, header=False, columns=["user", "ip_address"])
```
We then upload the validation data to S3 and specify it as the validation channel.
```
# Upload data to S3 key
validation_data_file = "valid.csv"
key = os.path.join(prefix, "validation", validation_data_file)
boto3.resource("s3").Bucket(bucket).Object(key).put(Body=valid_data)
s3_valid_data = f"s3://{bucket}/{key}"
print(f"Validation data has been uploaded to: {s3_valid_data}")
# Configure SageMaker IP Insights Input Channels
input_data = {"train": s3_train_data, "validation": s3_valid_data}
from sagemaker.tuner import HyperparameterTuner, IntegerParameter
# Configure HyperparameterTuner
ip_insights_tuner = HyperparameterTuner(
estimator=ip_insights, # previously-configured Estimator object
objective_metric_name="validation:discriminator_auc",
hyperparameter_ranges={"vector_dim": IntegerParameter(64, 1024)},
max_jobs=4,
max_parallel_jobs=2,
)
# Start hyperparameter tuning job
ip_insights_tuner.fit(input_data, include_cls_metadata=False)
# Wait for all the jobs to finish
ip_insights_tuner.wait()
# Visualize training job results
ip_insights_tuner.analytics().dataframe()
# Deploy best model
tuned_predictor = ip_insights_tuner.deploy(
initial_instance_count=1,
instance_type="ml.m4.xlarge",
serializer=csv_serializer,
deserializer=json_deserializer,
)
# Make a prediction against the SageMaker endpoint
tuned_predictor.predict(
inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json"}
)
```
We should now have the best performing model from the tuning job! We can determine thresholds and make predictions just like we did with the inference endpoint [above](#Inference).
### Batch Transform
Let's say we want to score all of the login events at the end of the day and aggregate flagged cases for investigators to look at in the morning. If we store the daily login events in S3, we can use IP Insights with [Amazon SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) to run inference and store the IP Insights scores back in S3 for future analysis.
Below, we take the training job from before and evaluate it on the validation data we put in S3.
```
transformer = ip_insights.transformer(instance_count=1, instance_type="ml.m4.xlarge")
transformer.transform(s3_valid_data, content_type="text/csv", split_type="Line")
# Wait for Transform Job to finish
transformer.wait()
print(f"Batch Transform output is at: {transformer.output_path}")
```
### Stop and Delete the Endpoint
If you are done with this model, you should delete the endpoints before closing the notebook; otherwise, you will continue to pay for them while they are running.
To do so, execute the cell below. Alternatively, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint matching the endpoint name printed earlier, and select "Delete" from the "Actions" dropdown menu.
```
ip_insights_tuner.delete_endpoint()
sagemaker.Session().delete_endpoint(predictor.endpoint)
```
# Exploring Machine Learning on Quantopian
Recently, Quantopian’s Chief Investment Officer, Jonathan Larkin, shared an industry insider’s overview of the [professional quant equity workflow][1]. This workflow is comprised of distinct stages including: (1) Universe Definition, (2) Alpha Discovery, (3) Alpha Combination, (4) Portfolio Construction and (5) Trading.
This Notebook focuses on stage 3: Alpha Combination. At this stage, Machine Learning is an intuitive choice: we have abstracted the problem to the point where it is a classic classification (or regression) problem, which ML is very good at solving, yielding an alpha combination that is hopefully more predictive than any single factor.
As you will see, there is a lot of code here setting up a factor library and some data wrangling to get everything into shape. The details of this part are perhaps not quite as interesting so feel free to skip directly to ["Training our ML pipeline"](#training) where we have everything in place to train and test our classifier.
## Overview
1. Define trading universe to use ([Q500US and Q1500US][2]).
2. Define alphas (implemented in [Pipeline][3]).
3. Run pipeline.
4. Split into train and test set.
5. Preprocess data (rank alphas, subsample, align alphas with future returns, impute, scale).
6. Train Machine Learning classifier ([AdaBoost from Scikit-Learn][4]).
7. Evaluate Machine Learning classifier on test set.
Note that one important limitation is that we only train and test on static (i.e. fixed-in-time) data. Thus, you cannot directly do the same in an algorithm. In principle, this is possible and will be the next step, but it makes sense to first focus on just the ML in a more direct way to get a good intuition about the workflow and how to develop a competitive ML pipeline.
### Disclaimer
This workflow is still a bit rough around the edges. We are working on improving it and adding better educational materials. This serves as a sneak-peek for the curious and adventurous.
[1]: http://blog.quantopian.com/a-professional-quant-equity-workflow/
[2]: https://www.quantopian.com/posts/the-q500us-and-q1500us
[3]: https://www.quantopian.com/tutorials/pipeline
[4]: http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html
[5]: https://www.quantopian.com/posts/alphalens-a-new-tool-for-analyzing-alpha-factors
```
from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Latest
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import CustomFactor, SimpleMovingAverage, AverageDollarVolume, Returns, RSI
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters import Q500US, Q1500US
from quantopian.pipeline.data.quandl import fred_usdontd156n as libor
from quantopian.pipeline.data.zacks import EarningsSurprises
import talib
import pandas as pd
import numpy as np
from time import time
import alphalens as al
import pyfolio as pf
from scipy import stats
import matplotlib.pyplot as plt
from sklearn import linear_model, decomposition, ensemble, preprocessing, isotonic, metrics
```
## Definition of some commonly used factors
The factors below are a small collection of commonly used alphas that were coded by Gil Wassermann. I will post a separate Notebook with the full collection and more descriptions of them. Ultimately we will put these into a library you can just import to avoid the wall of text. If you want to understand more about pipeline, read the [tutorial](https://www.quantopian.com/tutorials/pipeline).
Also note the `Earnings_Quality` alpha which uses [Zacks Earnings Surprises](https://www.quantopian.com/data/zacks/earnings_surprises), a [new source from our partners](https://www.quantopian.com/data).
The details of these factors are not the focus of this Notebook so feel free to just [skip](#universe) this cell.
```
bs = morningstar.balance_sheet
cfs = morningstar.cash_flow_statement
is_ = morningstar.income_statement
or_ = morningstar.operation_ratios
er = morningstar.earnings_report
v = morningstar.valuation
vr = morningstar.valuation_ratios
def make_factors():
def Asset_Growth_3M():
return Returns(inputs=[bs.total_assets], window_length=63)
def Asset_To_Equity_Ratio():
return bs.total_assets.latest / bs.common_stock_equity.latest
def Capex_To_Cashflows():
return (cfs.capital_expenditure.latest * 4.) / \
(cfs.free_cash_flow.latest * 4.)
def EBITDA_Yield():
return (is_.ebitda.latest * 4.) / \
USEquityPricing.close.latest
def EBIT_To_Assets():
return (is_.ebit.latest * 4.) / \
bs.total_assets.latest
def Earnings_Quality():
return morningstar.cash_flow_statement.operating_cash_flow.latest / \
EarningsSurprises.eps_act.latest
def Return_On_Total_Invest_Capital():
return or_.roic.latest
class Mean_Reversion_1M(CustomFactor):
inputs = [Returns(window_length=21)]
window_length = 252
def compute(self, today, assets, out, monthly_rets):
out[:] = (monthly_rets[-1] - np.nanmean(monthly_rets, axis=0)) / \
np.nanstd(monthly_rets, axis=0)
class MACD_Signal_10d(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 60
def compute(self, today, assets, out, close):
sig_lines = []
for col in close.T:
# get signal line only
try:
_, signal_line, _ = talib.MACD(col, fastperiod=12,
slowperiod=26, signalperiod=10)
sig_lines.append(signal_line[-1])
# if error calculating, return NaN
except:
sig_lines.append(np.nan)
out[:] = sig_lines
class Moneyflow_Volume_5d(CustomFactor):
inputs = [USEquityPricing.close, USEquityPricing.volume]
window_length = 5
def compute(self, today, assets, out, close, volume):
mfvs = []
for col_c, col_v in zip(close.T, volume.T):
# denominator
denominator = np.dot(col_c, col_v)
# numerator
numerator = 0.
for n, price in enumerate(col_c.tolist()):
if price > col_c[n - 1]:
numerator += price * col_v[n]
else:
numerator -= price * col_v[n]
mfvs.append(numerator / denominator)
out[:] = mfvs
def Net_Income_Margin():
return or_.net_margin.latest
def Operating_Cashflows_To_Assets():
return (cfs.operating_cash_flow.latest * 4.) / \
bs.total_assets.latest
def Price_Momentum_3M():
return Returns(window_length=63)
class Price_Oscillator(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 252
def compute(self, today, assets, out, close):
four_week_period = close[-20:]
out[:] = (np.nanmean(four_week_period, axis=0) /
np.nanmean(close, axis=0)) - 1.
def Returns_39W():
return Returns(window_length=215)
class Trendline(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 252
# using MLE for speed
def compute(self, today, assets, out, close):
# prepare X matrix (x_is - x_bar)
X = range(self.window_length)
X_bar = np.nanmean(X)
X_vector = X - X_bar
X_matrix = np.tile(X_vector, (len(close.T), 1)).T
# prepare Y matrix (y_is - y_bar)
Y_bar = np.nanmean(close, axis=0)
Y_bars = np.tile(Y_bar, (self.window_length, 1))
Y_matrix = close - Y_bars
# prepare variance of X
X_var = np.nanvar(X)
# multiply X matrix an Y matrix and sum (dot product)
# then divide by variance of X
# this gives the MLE of Beta
out[:] = (np.sum((X_matrix * Y_matrix), axis=0) / X_var) / \
(self.window_length)
class Vol_3M(CustomFactor):
inputs = [Returns(window_length=2)]
window_length = 63
def compute(self, today, assets, out, rets):
out[:] = np.nanstd(rets, axis=0)
def Working_Capital_To_Assets():
return bs.working_capital.latest / bs.total_assets.latest
all_factors = {
'Asset Growth 3M': Asset_Growth_3M,
'Asset to Equity Ratio': Asset_To_Equity_Ratio,
'Capex to Cashflows': Capex_To_Cashflows,
'EBIT to Assets': EBIT_To_Assets,
'EBITDA Yield': EBITDA_Yield,
'Earnings Quality': Earnings_Quality,
'MACD Signal Line': MACD_Signal_10d,
'Mean Reversion 1M': Mean_Reversion_1M,
'Moneyflow Volume 5D': Moneyflow_Volume_5d,
'Net Income Margin': Net_Income_Margin,
'Operating Cashflows to Assets': Operating_Cashflows_To_Assets,
'Price Momentum 3M': Price_Momentum_3M,
'Price Oscillator': Price_Oscillator,
'Return on Invest Capital': Return_On_Total_Invest_Capital,
'39 Week Returns': Returns_39W,
'Trendline': Trendline,
'Vol 3M': Vol_3M,
'Working Capital to Assets': Working_Capital_To_Assets,
}
return all_factors
```
<a id='universe'></a>
## Define universe and select factors to use
We will screen our universe using the new [Q1500US](https://www.quantopian.com/posts/the-q500us-and-q1500us) and hand-pick a few alphas from the list above. We encourage you to play around with the factors.
```
universe = Q1500US()
factors = make_factors()
```
##Define and build the pipeline
Next we have to build the pipeline. In addition to the factors defined above, we need the forward returns we want to predict. In this Notebook we will predict 5-day returns and train our model on daily data. You could also subsample the data (e.g. to weekly) to avoid overlapping return periods, but we omit this here.
```
n_fwd_days = 5 # number of days to compute returns over
def make_history_pipeline(factors, universe, n_fwd_days=5):
# Call .rank() on all factors and mask out the universe
factor_ranks = {name: f().rank(mask=universe) for name, f in factors.iteritems()}
# Get cumulative returns over last n_fwd_days days. We will later shift these.
factor_ranks['Returns'] = Returns(inputs=[USEquityPricing.open],
mask=universe, window_length=n_fwd_days)
pipe = Pipeline(screen=universe, columns=factor_ranks)
return pipe
history_pipe = make_history_pipeline(factors, universe, n_fwd_days=n_fwd_days)
```
##Run the pipeline
```
start_timer = time()
start = pd.Timestamp("2016-03-06")
end = pd.Timestamp("2016-09-14")
results = run_pipeline(history_pipe, start_date=start, end_date=end)
results.index.names = ['date', 'security']
end_timer = time()
print "Time to run pipeline %.2f secs" % (end_timer - start_timer)
results.head()
results.tail()
```
As you can see, running the pipeline gives us factors for every day and every security, ranked relative to each other. We assume that the ordering of individual factors might carry some weak predictive power on future returns. The question then becomes: how can we combine these weakly predictive factors in a clever way to get a single mega-alpha which is hopefully more predictive?
This is an important milestone. We have our ranked factor values on each day for each stock. Ranking is not absolutely necessary but has several benefits:
* it increases robustness to outliers,
* we mostly care about the relative ordering rather than the absolute values.
Also note the `Returns` column. These are the values we want to predict given the factor ranks.
Next, we are doing some additional transformations to our data:
1. Shift factor ranks to align with future returns `n_fwd_days` days in the future.
2. Find the top and bottom 30 percentile stocks by their returns. Essentially, we only care about relative movement of stocks. If we later short stocks that go down and long stocks going up relative to each other, it doesn't matter if e.g. all stocks are going down in absolute terms. Moreover, we are ignoring stocks that did not move that much (i.e. 30th to 70th percentile) to only train the classifier on those that provided a strong signal.
3. We also binarize the returns by their percentile to turn our ML problem into a classification one.
`shift_mask_data()` is a utility function that does all of these.
```
def shift_mask_data(X, Y, upper_percentile=70, lower_percentile=30, n_fwd_days=1):
# Shift X to match factors at t to returns at t+n_fwd_days (we want to predict future returns after all)
shifted_X = np.roll(X, n_fwd_days, axis=0)
# Slice off rolled elements
X = shifted_X[n_fwd_days:]
Y = Y[n_fwd_days:]
n_time, n_stocks, n_factors = X.shape
# Look for biggest up and down movers
upper = np.nanpercentile(Y, upper_percentile, axis=1)[:, np.newaxis]
lower = np.nanpercentile(Y, lower_percentile, axis=1)[:, np.newaxis]
upper_mask = (Y >= upper)
lower_mask = (Y <= lower)
mask = upper_mask | lower_mask # This also drops nans
mask = mask.flatten()
# Only try to predict whether a stock moved up/down relative to other stocks
Y_binary = np.zeros(n_time * n_stocks)
Y_binary[upper_mask.flatten()] = 1
Y_binary[lower_mask.flatten()] = -1
# Flatten X
X = X.reshape((n_time * n_stocks, n_factors))
# Drop stocks that did not move much (i.e. are in the 30th to 70th percentile)
X = X[mask]
Y_binary = Y_binary[mask]
return X, Y_binary
```
After we have our helper function to align our data properly we pass our factor ranks to it. You might wonder why we have to do the `swapaxes` thing below rather than just using `pandas` logic. The reason is that this way we can use the same `shift_mask_data()` function inside of a factor where we do not have access to a Pandas `DataFrame`. More on that in a future notebook.
```
# Massage data to be in the form expected by shift_mask_data()
results_wo_returns = results.copy()
returns = results_wo_returns.pop('Returns')
Y = returns.unstack().values
X = results_wo_returns.to_panel()
X = X.swapaxes(2, 0).swapaxes(0, 1).values # (factors, time, stocks) -> (time, stocks, factors)
```
Next, we split our data into training (80%) and test (20%) sets. This is common practice: our classifier will try to fit the training set as well as possible, but training performance alone does not tell us how well it would perform on unseen data. Because we are dealing with time-series data, we split along the time dimension so that we only test on future data.
```
# Train-test split
train_size_perc = 0.8
n_time, n_stocks, n_factors = X.shape
train_size = np.int16(np.round(train_size_perc * n_time))
X_train, Y_train = X[:train_size, ...], Y[:train_size]
X_test, Y_test = X[(train_size+n_fwd_days):, ...], Y[(train_size+n_fwd_days):]
```
While we exclude stocks that did not move much (i.e. 30th to 70th percentile) from training, we keep all stocks in our test set and simply binarize their returns according to the median. This avoids look-ahead bias.
```
X_train_shift, Y_train_shift = shift_mask_data(X_train, Y_train, n_fwd_days=n_fwd_days)
X_test_shift, Y_test_shift = shift_mask_data(X_test, Y_test, n_fwd_days=n_fwd_days,
lower_percentile=50,
upper_percentile=50)
X_train_shift.shape, X_test_shift.shape
```
<a id='training'></a>
## Training our ML pipeline
Before training our classifier, several preprocessing steps are advisable. The first one imputes nan values with the factor mean to get clean training data, the second scales our factor ranks to be between [0, 1).
For training we are using the [AdaBoost classifier](https://en.wikipedia.org/wiki/AdaBoost) which automatically determines the most relevant features (factors) and tries to find a non-linear combination of features to maximize predictiveness while still being robust. In essence, AdaBoost trains an ensemble of weak classifiers (decision trees in this case) sequentially. Each subsequent weak classifier takes into account the samples (or data points) already classified by the previous weak classifiers. It then focuses on the samples misclassified by the previous weak classifiers and tries to get those correctly. With each new weak classifier you get more fine-grained in your decision function and correctly classify some previously misclassified samples. For prediction, you simply average the answer of all weak classifiers to get a single strong classifier.
Of course, this is just an example and you can let your creativity and skill roam freely.
```
start_timer = time()
# Train classifier
imputer = preprocessing.Imputer()
scaler = preprocessing.MinMaxScaler()
clf = ensemble.AdaBoostClassifier(n_estimators=150) # n_estimators controls how many weak classifiers are fit
X_train_trans = imputer.fit_transform(X_train_shift)
X_train_trans = scaler.fit_transform(X_train_trans)
clf.fit(X_train_trans, Y_train_shift)
end_timer = time()
print "Time to train full ML pipline: %0.2f secs" % (end_timer - start_timer)
```
As you can see, training a modern ML classifier does not have to be very compute intensive. Scikit-learn is heavily optimized, so the full process takes less than 10 seconds. Of course, things like deep learning (which is currently not available on Quantopian) might take a bit longer, but those models are also trained on data sets much bigger than this (a famous subset of the ImageNet data set is 138 GB).
This means that the current bottleneck is retrieving the data from pipeline (RAM and I/O), not a lack of GPU or parallel processing support.
```
Y_pred = clf.predict(X_train_trans)
print('Accuracy on train set = {:.2f}%'.format(metrics.accuracy_score(Y_train_shift, Y_pred) * 100))
```
The classifier does reasonably well on the data we trained it on, but the real test is on hold-out data.
*Exercise*: It is also common to run cross-validation on the training data and tweak the parameters based on that score, testing should only be done rarely. Try coding a [sklearn pipeline](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) with [K-Fold cross-validation](http://scikit-learn.org/stable/modules/cross_validation.html).
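One possible shape of a solution (a sketch only; it reuses the preprocessing steps from above, the fold count is a placeholder, and the location of `cross_val_score` depends on your scikit-learn version):
```
from sklearn.pipeline import Pipeline
try:
    from sklearn.model_selection import cross_val_score  # scikit-learn >= 0.18
except ImportError:
    from sklearn.cross_validation import cross_val_score  # older scikit-learn

ml_pipe = Pipeline([
    ('imputer', preprocessing.Imputer()),
    ('scaler', preprocessing.MinMaxScaler()),
    ('clf', ensemble.AdaBoostClassifier(n_estimators=150)),
])

# 5-fold cross-validation accuracy, using the training data only
scores = cross_val_score(ml_pipe, X_train_shift, Y_train_shift, cv=5)
print "CV accuracy: %.2f%% +/- %.2f%%" % (scores.mean() * 100, scores.std() * 100)
```
Keep in mind that plain K-Fold shuffles across time; for time-series data a time-aware split (training folds strictly before validation folds) is usually preferable.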
## Evaluating our ML classifier
To evaluate our ML classifier on the test set, we have to transform our test data in the same way as our training data. Note that we are only calling the `.transform()` method here, which does not use any information from the test set.
```
# Transform test data
X_test_trans = imputer.transform(X_test_shift)
X_test_trans = scaler.transform(X_test_trans)
```
After all this work, we can finally test our classifier. We can predict binary labels but also get probability estimates.
```
# Predict!
Y_pred = clf.predict(X_test_trans)
Y_pred_prob = clf.predict_proba(X_test_trans)
print 'Predictions:', Y_pred
print 'Probabilities of class == 1:', Y_pred_prob[:, 1] * 100
```
There are many ways to evaluate the performance of our classifier. The simplest and most intuitive one is certainly the accuracy (50% is chance due to our median split). On Kaggle competitions, you will also often find the log-loss being used. This punishes you for being wrong *and* confident in your answer. See [the Kaggle description](https://www.kaggle.com/wiki/LogarithmicLoss) for more motivation.
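For reference, with binary labels $y_i \in \{0, 1\}$ (here our two classes play that role) and predicted probabilities $p_i$, the log-loss is:
$$ -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log p_i + (1 - y_i)\log(1 - p_i)\right] $$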
```
print('Accuracy on test set = {:.2f}%'.format(metrics.accuracy_score(Y_test_shift, Y_pred) * 100))
print('Log-loss = {:.5f}'.format(metrics.log_loss(Y_test_shift, Y_pred_prob)))
```
Seems like we're at chance on this data set, alas. But perhaps you can do better?
We can also examine which factors the classifier identified as most predictive.
```
feature_importances = pd.Series(clf.feature_importances_, index=results_wo_returns.columns)
feature_importances.sort(ascending=False)
ax = feature_importances.plot(kind='bar')
ax.set(ylabel='Importance (Gini Coefficient)', title='Feature importances');
```
*Exercise*: Use [partial dependence plots](http://scikit-learn.org/stable/auto_examples/ensemble/plot_partial_dependence.html) to get an understanding of how factor rankings are used to predict future returns.
## Where to go from here
Several knobs can be tweaked to boost performance:
* Add existing factors from the collection above to the data set.
* Come up with new factors
* Use [`alphalens`](https://www.quantopian.com/posts/alphalens-a-new-tool-for-analyzing-alpha-factors) to evaluate an alpha for its predictive power.
* Look for [novel data sources from our partners](https://www.quantopian.com/data).
* Look at the [101 Alpha's Project](https://www.quantopian.com/posts/the-101-alphas-project).
* Improve preprocessing of the ML pipeline
* Is 70/30 the best split?
* Should we not binarize the returns and do regression?
* Can we add Sector information in some way?
* Experiment with [feature selection](http://scikit-learn.org/stable/modules/feature_selection.html).
* PCA
* ICA
* etc.
* Tweak hyper-parameters of `AdaBoostClassifier`.
* [Use cross-validation to find optimal parameters](http://scikit-learn.org/stable/modules/grid_search.html).
* Try [different classifiers](http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html) or combinations of classifiers.
## Machine Learning competition
If you have something you think works well, post it in this thread. Make sure to test over the same time-period as I have here to keep things comparable. In a month from now, we can test on new data that has aggregated since then and determine who built the best ML pipeline. If there is demand, we might turn this into a proper ML contest.
## Machine Learning resources
If you look for information on how to get started with ML, here are a few resources:
* [Scikit-learn resources](http://scikit-learn.org/stable/presentations.html)
* [Learning scikit-learn: Machine Learning in Python](https://www.amazon.com/dp/1783281936)
* [Pattern Recognition and Machine Learning](https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738)
## How to put this into an algorithm
As mentioned above, this is not immediately usable in an algorithm. For one thing, there is no `run_pipeline()` in the backtest IDE. It turns out to be rather simple to take the code above and put it into a pipeline `CustomFactor()` where the ML model would automatically get retrained and make predictions. You would then long the `1` predictions and short the `-1` predictions, apply some weighting (e.g. inverse variance) and execute orders. More on these next steps in the future.
## Credits
* Content created by James Christopher and Thomas Wiecki
* Thanks to Sheng Wang for ideas and inspiration.
* Jonathan Larkin, Jess Stauth, Delaney Granizo-Mackenzie, and Jean Bredeche for helpful comments on an earlier draft.
# Naive Bayes
Naive Bayes is a method of calculating the probability of an element belonging to a certain class. It is a classification algorithm that favors efficiency over raw accuracy. Bayes' Theorem states:
$$ p(class \mid data) = \frac{p(data \mid class) \; p(class)}{p(data)} $$
- $ p(class|data) $ is the posterior: the probability of the class given the provided data
- $ p(data|class) $ is the likelihood: the probability of the data given the class
- $ p(class) $ is the prior probability of the class
- $ p(data) $ is the evidence: the overall probability of the data
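As a quick worked example with made-up numbers: suppose 1% of emails are spam ($p(class) = 0.01$), the word "offer" appears in 20% of spam ($p(data|class) = 0.2$), and "offer" appears in 2% of all emails ($p(data) = 0.02$). Then:
$$ p(spam \mid \text{"offer"}) = \frac{0.2 \times 0.01}{0.02} = 0.1 $$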
## Dataset
In this mini-project I will be utilizing the **Iris Flower Species Dataset** which involves the process of predicting the flower species based on the measurements of the iris flowers.
## Steps
1. #### Separate the dataset into classes
- [Iris-virginica] => 0
- [Iris-versicolor] => 1
- [Iris-setosa] => 2
2. #### Summarize the dataset
- Calculate mean
- Calculate standard deviation
3. #### Summarize data by class
- Calculate mean
- Calculate standard deviation
- Calculate statistics
4. #### Gaussian Probability Density Function
- Calculate the Gaussian probability density function (see the formula after this list)
5. #### Class Probabilities
- Calculate probability of each class
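For reference, the Gaussian probability density used in step 4, for a feature value $x$ with class mean $\mu$ and class standard deviation $\sigma$, is:
$$ P(x \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) $$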
```
from csv import reader
from random import seed
from random import randrange
from math import sqrt
from math import exp
from math import pi
import pandas as pd
import numpy as np
# Loading in the dataset
col_names = ['sepal_length','sepal_width','petal_length','petal_width','class']
dataset = pd.read_csv('iris.csv',names=col_names)
dataset
# Mapping classes to integer values
classes = {'Iris-virginica':0, 'Iris-versicolor':1, 'Iris-setosa':2}
dataset['class'] = dataset['class'].map(classes)
dataset
# Splitting dataset into classes
Ivirg = dataset.loc[dataset['class'] == 0]
Ivers = dataset.loc[dataset['class'] == 1]
Iseto = dataset.loc[dataset['class'] == 2]
Ivirg.pop('class')
Ivers.pop('class')
Iseto.pop('class')
Ivirg
# Grabbing statistics
Ivirg_stats = Ivirg.describe()
Ivirg_stats = Ivirg_stats.transpose()
Ivirg_stats
Ivers_stats = Ivers.describe()
Ivers_stats = Ivers_stats.transpose()
Ivers_stats
Iseto_stats = Iseto.describe()
Iseto_stats = Iseto_stats.transpose()
Iseto_stats
dict_stats = {0:Ivirg_stats,1:Ivers_stats,2:Iseto_stats}
# Calculate the Gaussian probability distribution function for x
def calculate_probability(x, stats):
    # Gaussian PDF: exp(-(x - mean)^2 / (2 * std^2)) / (sqrt(2 * pi) * std)
    exponent = np.exp(-((x - stats['mean'])**2 / (2 * stats['std']**2)))
    return (1 / (sqrt(2 * pi) * stats['std'])) * exponent
def calculate_class_probability(x):
probabilities = dict()
for i in range(len(classes)):
probabilities[i] = len(dataset[dataset['class']==i].index) / len(dataset['class'].index)
probabilities[i] *= np.prod(calculate_probability(x, dict_stats[i]))
max_key = max(probabilities, key=probabilities.get)
return max_key
predicted_class = calculate_class_probability([5.7,2.9,4.2,1.3])
predicted_class
```
#### Credits: [Naive Bayes Classifier From Scratch in Python](https://machinelearningmastery.com/naive-bayes-classifier-scratch-python/)
<a href="https://colab.research.google.com/github/Serbeld/Tensorflow/blob/master/PruebaMnist_with_custom_callback.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#!pip install tensorflow==1.3
#!pip install keras
import tensorflow as tf
print(tf.__version__)
import keras as k
print(k.__version__)
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, Activation
import keras
batch_size = 32
num_classes = 10
epochs = 15
# Image dimensions (rows, columns)
filas, columnas = 28, 28

# Load MNIST and reshape to (samples, 28, 28, 1) for the Conv2D layers
(xt, yt), (xtest, ytest) = mnist.load_data()
xt = xt.reshape(xt.shape[0], filas, columnas, 1)
xtest = xtest.reshape(xtest.shape[0], filas, columnas, 1)

# Scale pixel values to [0, 1]
xt = xt.astype('float32')
xtest = xtest.astype('float32')
xt = xt / 255
xtest = xtest / 255

# One-hot encode the labels
yt = keras.utils.to_categorical(yt, num_classes)
ytest = keras.utils.to_categorical(ytest, num_classes)

# Keep only the first 100 training samples for a quick demo
xt = xt[0:100]
yt = yt[0:100]
modelo = Sequential()
modelo.add(Conv2D(64,kernel_size=(2,2),activation='relu',
input_shape=(28,28,1)))
modelo.add(Conv2D(64,kernel_size=(2,2),activation='relu',
input_shape=(28,28,1)))
modelo.add(MaxPool2D(pool_size=(2,2)))
modelo.add(Flatten())
modelo.add(Dense(68))
modelo.add(Dropout(0.25))
modelo.add(Dense(20))
modelo.add(Dropout(0.25))
modelo.add(Dense(num_classes, activation='softmax'))  # softmax output to pair with categorical_crossentropy
modelo.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.categorical_crossentropy,
metrics=['categorical_accuracy'])
modelo.summary()
class LossAndErrorPrintingCallback(keras.callbacks.Callback):
global vector
vector = []
#def on_train_batch_end(self, batch, logs=None):
#print('For batch {}, loss is {:7.2f}.'.format(batch, logs['loss']))
#def on_test_batch_end(self, batch, logs=None):
# print('For batch {}, loss is {:7.2f}.'.format(batch, logs['loss']))
def on_epoch_end(self, epoch, logs=None):
vector.append(logs['categorical_accuracy'])
print('The average loss for epoch {} is {:7.2f} and categorical accuracy is {:7.2f}.'.format(epoch, logs['loss'], logs['categorical_accuracy']))
model = modelo.fit(xt, yt,batch_size,epochs,
validation_data=(xtest,ytest),
shuffle=True,verbose=0,
callbacks=[LossAndErrorPrintingCallback()])
#modelo.fit(xt,yt,batch_size,epochs,validation_data=(xtest,ytest),shuffle=True,verbose=1)
puntuacion = modelo.evaluate(xtest,ytest,verbose=1)
#plt.imshow(xt.shape[0])
#predictions = modelo.predict(xt[0])
print(puntuacion)
print(vector)
```
# LEARNING
This notebook serves as supporting material for topics covered in **Chapter 18 - Learning from Examples** , **Chapter 19 - Knowledge in Learning**, **Chapter 20 - Learning Probabilistic Models** from the book *Artificial Intelligence: A Modern Approach*. This notebook uses implementations from [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py). Let's start by importing everything from the module:
```
from learning import *
from notebook import *
```
## CONTENTS
* Machine Learning Overview
* Datasets
* Iris Visualization
* Distance Functions
* Plurality Learner
* k-Nearest Neighbours
* Decision Tree Learner
* Naive Bayes Learner
* Perceptron
* Learner Evaluation
## MACHINE LEARNING OVERVIEW
In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences.
An agent is **learning** if it improves its performance on future tasks after making observations about the world.
There are three types of feedback that determine the three main types of learning:
* **Supervised Learning**:
In Supervised Learning the agent observes some example input-output pairs and learns a function that maps from input to output.
**Example**: Let's think of an agent to classify images containing cats or dogs. If we provide an image containing a cat or a dog, this agent should output a string "cat" or "dog" for that particular image. To teach this agent, we will give a lot of input-output pairs like {cat image-"cat"}, {dog image-"dog"} to the agent. The agent then learns a function that maps from an input image to one of those strings.
* **Unsupervised Learning**:
In Unsupervised Learning the agent learns patterns in the input even though no explicit feedback is supplied. The most common type is **clustering**: detecting potential useful clusters of input examples.
**Example**: A taxi agent would develop a concept of *good traffic days* and *bad traffic days* without ever being given labeled examples.
* **Reinforcement Learning**:
In Reinforcement Learning the agent learns from a series of reinforcements—rewards or punishments.
**Example**: Let's talk about an agent to play the popular Atari game—[Pong](http://www.ponggame.org). We will reward a point for every correct move and deduct a point for every wrong move from the agent. Eventually, the agent will figure out which of its actions prior to the reinforcement were most responsible for it.
## DATASETS
For the following tutorials we will use a range of datasets, to better showcase the strengths and weaknesses of the algorithms. The datasets are the following:
* [Fisher's Iris](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/iris.csv): Each item represents a flower, with four measurements: the length and the width of the sepals and petals. Each item/flower is categorized into one of three species: Setosa, Versicolor and Virginica.
* [Zoo](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/zoo.csv): The dataset holds different animals and their classification as "mammal", "fish", etc. The new animal we want to classify has the following measurements: 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1 (don't concern yourself with what the measurements mean).
To make using the datasets easier, we have written a class, `DataSet`, in `learning.py`. The tutorials found here make use of this class.
Let's have a look at how it works before we get started with the algorithms.
### Intro
A lot of the datasets we will work with are .csv files (although other formats are supported too). We have a collection of sample datasets ready to use [on aima-data](https://github.com/aimacode/aima-data/tree/a21fc108f52ad551344e947b0eb97df82f8d2b2b). Two examples are the datasets mentioned above (*iris.csv* and *zoo.csv*). You can find plenty of datasets online, and a good repository of such datasets is the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets.html).
In such files, each line corresponds to one item/measurement. Each individual value in a line represents a *feature* and usually there is a value denoting the *class* of the item.
You can find the code for the dataset here:
```
%psource DataSet
```
### Class Attributes
* **examples**: Holds the items of the dataset. Each item is a list of values.
* **attrs**: The indexes of the features (by default in the range of [0,f), where *f* is the number of features). For example, `item[i]` returns the feature at index *i* of *item*.
* **attrnames**: An optional list with attribute names. For example, `item[s]`, where *s* is a feature name, returns the feature of name *s* in *item*.
* **target**: The attribute a learning algorithm will try to predict. By default the last attribute.
* **inputs**: This is the list of attributes without the target.
* **values**: A list of lists which holds the set of possible values for the corresponding attribute/feature. If initially `None`, it gets computed (by the function `setproblem`) from the examples.
* **distance**: The distance function used in the learner to calculate the distance between two items. By default `mean_boolean_error`.
* **name**: Name of the dataset.
* **source**: The source of the dataset (url or other). Not used in the code.
* **exclude**: A list of indexes to exclude from `inputs`. The list can include either attribute indexes (attrs) or names (attrnames).
### Class Helper Functions
These functions help modify a `DataSet` object to your needs.
* **sanitize**: Takes as input an example and returns it with non-input (target) attributes replaced by `None`. Useful for testing. Keep in mind that the example given is not itself sanitized, but instead a sanitized copy is returned.
* **classes_to_numbers**: Maps the class names of a dataset to numbers. If the class names are not given, they are computed from the dataset values. Useful for classifiers that return a numerical value instead of a string.
* **remove_examples**: Removes examples containing a given value. Useful for removing examples with missing values, or for removing classes (needed for binary classifiers).
### Importing a Dataset
#### Importing from aima-data
Datasets uploaded on aima-data can be imported with the following line:
```
iris = DataSet(name="iris")
```
To check that we imported the correct dataset, we can do the following:
```
print(iris.examples[0])
print(iris.inputs)
```
Which correctly prints the first line in the csv file and the list of attribute indexes.
When importing a dataset, we can specify to exclude an attribute (for example, at index 1) by setting the parameter `exclude` to the attribute index or name.
```
iris2 = DataSet(name="iris",exclude=[1])
print(iris2.inputs)
```
### Attributes
Here we showcase the attributes.
First we will print the first three items/examples in the dataset.
```
print(iris.examples[:3])
```
Then we will print `attrs`, `attrnames`, `target`, `inputs`. Notice how `attrs` holds values in [0,4], but since the fourth attribute is the target, `inputs` holds values in [0,3].
```
print("attrs:", iris.attrs)
print("attrnames (by default same as attrs):", iris.attrnames)
print("target:", iris.target)
print("inputs:", iris.inputs)
```
Now we will print all the possible values for the first feature/attribute.
```
print(iris.values[0])
```
Finally we will print the dataset's name and source. Keep in mind that we have not set a source for the dataset, so in this case it is empty.
```
print("name:", iris.name)
print("source:", iris.source)
```
A useful combination of the above is `dataset.values[dataset.target]` which returns the possible values of the target. For classification problems, this will return all the possible classes. Let's try it:
```
print(iris.values[iris.target])
```
### Helper Functions
We will now take a look at the auxiliary functions found in the class.
First we will take a look at the `sanitize` function, which sets the non-input values of the given example to `None`.
In this case we want to hide the class of the first example, so we will sanitize it.
Note that the function doesn't actually change the given example; it returns a sanitized *copy* of it.
```
print("Sanitized:",iris.sanitize(iris.examples[0]))
print("Original:",iris.examples[0])
```
Currently the `iris` dataset has three classes: setosa, virginica and versicolor. We want, though, to convert it to a binary class dataset (a dataset with two classes). The class we want to remove is "virginica". To accomplish that, we will utilize the helper function `remove_examples`.
```
iris2 = DataSet(name="iris")
iris2.remove_examples("virginica")
print(iris2.values[iris2.target])
```
We also have `classes_to_numbers`. For a lot of the classifiers in the module (like the Neural Network), classes should have numerical values. With this function we map string class names to numbers.
```
print("Class of first example:",iris2.examples[0][iris2.target])
iris2.classes_to_numbers()
print("Class of first example:",iris2.examples[0][iris2.target])
```
As you can see "setosa" was mapped to 0.
Finally, we take a look at `find_means_and_deviations`. It finds the means and standard deviations of the features for each class.
```
means, deviations = iris.find_means_and_deviations()
print("Setosa feature means:", means["setosa"])
print("Versicolor mean for first feature:", means["versicolor"][0])
print("Setosa feature deviations:", deviations["setosa"])
print("Virginica deviation for second feature:",deviations["virginica"][1])
```
## IRIS VISUALIZATION
Since we will use the iris dataset extensively in this notebook, below we provide a visualization tool that helps in comprehending the dataset and thus how the algorithms work.
We plot the dataset in a 3D space using `matplotlib` and the function `show_iris` from `notebook.py`. The function takes as input three parameters, *i*, *j* and *k*, which are indices into the iris features, "Sepal Length", "Sepal Width", "Petal Length" and "Petal Width" (0 to 3). By default we show the first three features.
```
iris = DataSet(name="iris")
show_iris()
show_iris(0, 1, 3)
show_iris(1, 2, 3)
```
You can play around with the values to get a good look at the dataset.
## DISTANCE FUNCTIONS
In a lot of algorithms (like the *k-Nearest Neighbors* algorithm), there is a need to compare items, finding how *similar* or *close* they are. For that we have many different functions at our disposal. Below are the functions implemented in the module:
### Manhattan Distance (`manhattan_distance`)
One of the simplest distance functions. It calculates the difference between the coordinates/features of two items. To understand how it works, imagine a 2D grid with coordinates *x* and *y*. In that grid we have two items, at the squares positioned at `(1,2)` and `(3,4)`. The difference between their two coordinates is `3-1=2` and `4-2=2`. If we sum these up we get `4`. That means to get from `(1,2)` to `(3,4)` we need four moves; two to the right and two more up. The function works similarly for n-dimensional grids.
```
def manhattan_distance(X, Y):
return sum([abs(x - y) for x, y in zip(X, Y)])
distance = manhattan_distance([1,2], [3,4])
print("Manhattan Distance between (1,2) and (3,4) is", distance)
```
### Euclidean Distance (`euclidean_distance`)
Probably the most popular distance function. It returns the square root of the sum of the squared differences between individual elements of two items.
```
def euclidean_distance(X, Y):
return math.sqrt(sum([(x - y)**2 for x, y in zip(X,Y)]))
distance = euclidean_distance([1,2], [3,4])
print("Euclidean Distance between (1,2) and (3,4) is", distance)
```
### Hamming Distance (`hamming_distance`)
This function counts the number of differences between single elements in two items. For example, if we have two binary strings "111" and "011" the function will return 1, since the two strings only differ at the first element. The function works the same way for non-binary strings too.
```
def hamming_distance(X, Y):
return sum(x != y for x, y in zip(X, Y))
distance = hamming_distance(['a','b','c'], ['a','b','b'])
print("Hamming Distance between 'abc' and 'abb' is", distance)
```
### Mean Boolean Error (`mean_boolean_error`)
To calculate this distance, we find the ratio of differing elements over all elements of two items. For example, if the two items are `(1,2,3)` and `(1,4,5)`, the ratio of differing/all elements is 2/3, since they differ in two out of three elements.
```
def mean_boolean_error(X, Y):
return mean(int(x != y) for x, y in zip(X, Y))
distance = mean_boolean_error([1,2,3], [1,4,5])
print("Mean Boolean Error Distance between (1,2,3) and (1,4,5) is", distance)
```
### Mean Error (`mean_error`)
This function finds the mean difference of single elements between two items. For example, if the two items are `(1,0,5)` and `(3,10,5)`, their error distance is `(3-1) + (10-0) + (5-5) = 2 + 10 + 0 = 12`. The mean error distance therefore is `12/3=4`.
```
def mean_error(X, Y):
return mean([abs(x - y) for x, y in zip(X, Y)])
distance = mean_error([1,0,5], [3,10,5])
print("Mean Error Distance between (1,0,5) and (3,10,5) is", distance)
```
### Mean Square Error (`ms_error`)
This is very similar to the `Mean Error`, but instead of calculating the difference between elements, we are calculating the *square* of the differences.
```
def ms_error(X, Y):
return mean([(x - y)**2 for x, y in zip(X, Y)])
distance = ms_error([1,0,5], [3,10,5])
print("Mean Square Distance between (1,0,5) and (3,10,5) is", distance)
```
### Root of Mean Square Error (`rms_error`)
This is the square root of `Mean Square Error`.
```
def rms_error(X, Y):
return math.sqrt(ms_error(X, Y))
distance = rms_error([1,0,5], [3,10,5])
print("Root of Mean Error Distance between (1,0,5) and (3,10,5) is", distance)
```
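As a quick sanity check, here is a self-contained comparison of these metrics on two made-up four-feature measurements; the helper logic is re-implemented inline so the cell runs on its own, independent of the module.
```
import math
from statistics import mean

a = [5.1, 3.5, 1.4, 0.2]   # hypothetical iris-style measurements
b = [6.3, 3.3, 6.0, 2.5]

print("Manhattan:", sum(abs(x - y) for x, y in zip(a, b)))
print("Euclidean:", math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))))
print("Mean error:", mean(abs(x - y) for x, y in zip(a, b)))
print("Mean square error:", mean((x - y) ** 2 for x, y in zip(a, b)))
print("Root mean square error:", math.sqrt(mean((x - y) ** 2 for x, y in zip(a, b))))
```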
## PLURALITY LEARNER CLASSIFIER
### Overview
The Plurality Learner is a simple algorithm, used mainly as a baseline comparison for other algorithms. It finds the most popular class in the dataset and classifies any subsequent item to that class. Essentially, it classifies every new item to the same class. For that reason it is not used very often; instead, we opt for more sophisticated algorithms when we want accurate classification.

Let's see how the classifier works with the plot above. There are three classes: **Class A** (orange dots), **Class B** (blue dots) and **Class C** (green dots). Every point in this plot has two **features** (i.e. X<sub>1</sub>, X<sub>2</sub>). Now, say we have a new point, the red star, and we want to know which class it belongs to. Predicting the class of this new red star is our classification problem.
The Plurality Learner will find the class most represented in the plot. ***Class A*** has four items, ***Class B*** has three and ***Class C*** has seven. The most popular class is ***Class C***. Therefore, the item will get classified in ***Class C***, despite the fact that it is closer to the other two classes.
### Implementation
Below follows the implementation of the PluralityLearner algorithm:
```
psource(PluralityLearner)
```
It takes as input a dataset and returns a function. We can later call this function with the item we want to classify as the argument and it returns the class it should be classified in.
The function first finds the most popular class in the dataset and then each time we call its "predict" function, it returns it. Note that the input ("example") does not matter. The function always returns the same class.
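To make the idea concrete, here is a minimal sketch of such a learner (not the module's code; it assumes each example stores its class label in the last position):
```
from collections import Counter

def plurality_learner_sketch(examples):
    """Return a predictor that always answers with the most common class."""
    most_common_class = Counter(e[-1] for e in examples).most_common(1)[0][0]
    def predict(example):
        return most_common_class  # the input is ignored on purpose
    return predict

predict = plurality_learner_sketch([[1, 'a'], [2, 'a'], [3, 'b']])
print(predict([42, 0]))  # prints 'a'
```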
### Example
For this example we will not use the iris dataset, since each class is equally represented there and the learner cannot pick a single most popular class. Instead we will use the zoo dataset.
```
zoo = DataSet(name="zoo")
pL = PluralityLearner(zoo)
print(pL([1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1]))
```
The output for the above code is "mammal", since that is the most popular and common class in the dataset.
## K-NEAREST NEIGHBOURS CLASSIFIER
### Overview
The k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are going to use this to classify Iris flowers. More about kNN on [Scholarpedia](http://www.scholarpedia.org/article/K-nearest_neighbor).

Let's see how kNN works with a simple plot shown in the above picture.
We have the coordinates (we call them **features** in machine learning) of this red star and we need to predict its class using the kNN algorithm. In this algorithm, the value of **k** is arbitrary; **k** is one of the **hyperparameters** of the kNN algorithm. We choose this number based on our dataset, and choosing a particular value is known as **hyperparameter tuning/optimisation**. We learn more about this in coming topics.
Let's put **k = 3**. That means we need to find the 3 nearest neighbors of this red star and classify the new point into the majority class among them. Observe the smaller circle, which contains three points besides the **test point** (the red star). Two of them are violet, forming the majority, so we predict the class of the red star as **violet (Class B)**.
Similarly, if we put **k = 5**, there are three yellow points forming the majority, so we classify our test point as **yellow (Class A)**.
In practical tasks, we iterate through a bunch of values for k (like [1, 3, 5, 10, 20, 50, 100]), see how it performs and select the best one.
### Implementation
Below follows the implementation of the kNN algorithm:
```
psource(NearestNeighborLearner)
```
It takes as input a dataset and k (default value is 1) and it returns a function, which we can later use to classify a new item.
To accomplish that, the function uses a heap-queue, where the items of the dataset are sorted according to their distance from *example* (the item to classify). We then take the k smallest elements from the heap-queue and we find the majority class. We classify the item to this class.
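A stripped-down sketch of that idea, using `heapq.nsmallest`, Euclidean distance and plain feature lists with the class label in the last position (an illustration rather than the module's exact code):
```
import heapq
import math
from collections import Counter

def knn_sketch(examples, k=1):
    def predict(example):
        dist = lambda e: math.sqrt(sum((x - y) ** 2 for x, y in zip(e[:-1], example)))
        nearest = heapq.nsmallest(k, examples, key=dist)
        return Counter(e[-1] for e in nearest).most_common(1)[0][0]
    return predict

train = [[1.0, 1.0, 'a'], [1.2, 0.9, 'a'], [5.0, 5.1, 'b'], [5.2, 4.9, 'b']]
print(knn_sketch(train, k=3)([1.1, 1.0]))  # prints 'a'
```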
### Example
We measured a new flower with the following values: 5.1, 3.0, 1.1, 0.1. We want to classify that item/flower in a class. To do that, we write the following:
```
iris = DataSet(name="iris")
kNN = NearestNeighborLearner(iris,k=3)
print(kNN([5.1,3.0,1.1,0.1]))
```
The output of the above code is "setosa", which means the flower with the above measurements is of the "setosa" species.
## DECISION TREE LEARNER
### Overview
#### Decision Trees
A decision tree is a flowchart that uses a tree of decisions and their possible consequences for classification. At each non-leaf node of the tree an attribute of the input is tested, based on which corresponding branch leading to a child-node is selected. At the leaf node the input is classified based on the class label of this leaf node. The paths from root to leaves represent classification rules based on which leaf nodes are assigned class labels.

#### Decision Tree Learning
Decision tree learning is the construction of a decision tree from class-labeled training data. The data is expected to be a tuple in which each record of the tuple is an attribute used for classification. The decision tree is built top-down, by choosing a variable at each step that best splits the set of items. There are different metrics for measuring the "best split". These generally measure the homogeneity of the target variable within the subsets.
#### Gini Impurity
The Gini impurity of a set is the probability of a randomly chosen element being incorrectly labeled if it were labeled randomly according to the distribution of labels in the set.
$$I_G(p) = \sum{p_i(1 - p_i)} = 1 - \sum{p_i^2}$$
We select a split which minimizes the Gini impurity in child nodes.
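For example, a node holding 4 items of one class and 6 of another has an impurity of `1 - (0.4**2 + 0.6**2) = 0.48`, while a pure node has an impurity of 0; a quick check:
```
def gini(counts):
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

print(gini([4, 6]))   # mixed node: 0.48
print(gini([10, 0]))  # pure node: 0.0
```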
#### Information Gain
Information gain is based on the concept of entropy from information theory. Entropy is defined as:
$$H(p) = -\sum{p_i \log_2{p_i}}$$
Information gain is the difference between the entropy of the parent and the weighted sum of the entropies of the children. The feature used for splitting is the one which provides the most information gain.
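A small numeric example: assume a parent node with 5 items of each class, which a candidate attribute splits into a (4, 1) child and a (1, 4) child.
```
import math

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

parent = entropy([5, 5])  # 1.0 bit
weighted_children = 0.5 * entropy([4, 1]) + 0.5 * entropy([1, 4])
print("Information gain:", parent - weighted_children)  # about 0.278 bits
```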
#### Pseudocode
You can view the pseudocode by running the cell below:
```
pseudocode("Decision Tree Learning")
```
### Implementation
The nodes of the tree constructed by our learning algorithm are stored using either `DecisionFork` or `DecisionLeaf` based on whether they are a parent node or a leaf node respectively.
```
psource(DecisionFork)
```
`DecisionFork` holds the attribute, which is tested at that node, and a dict of branches. The branches store the child nodes, one for each of the attribute's values. Calling an object of this class as a function with input tuple as an argument returns the next node in the classification path based on the result of the attribute test.
```
psource(DecisionLeaf)
```
The leaf node stores the class label in `result`. All input tuples' classification paths end on a `DecisionLeaf`, whose `result` attribute decides their class.
```
psource(DecisionTreeLearner)
```
The implementation of `DecisionTreeLearner` provided in [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py) uses information gain as the metric for selecting which attribute to test for splitting. The function builds the tree top-down in a recursive manner. Based on the input it makes one of the four choices:
<ol>
<li>If the input at the current step has no training data we return the mode of classes of input data received in the parent step (previous level of recursion).</li>
<li>If all values in training data belong to the same class it returns a `DecisionLeaf` whose class label is the class which all the data belongs to.</li>
<li>If the data has no attributes that can be tested we return the class with highest plurality value in the training data.</li>
<li>Otherwise, we choose the attribute with the highest information gain and return a `DecisionFork` which splits based on this attribute. Each branch recursively calls `decision_tree_learning` to construct the sub-tree.</li>
</ol>
### Example
We will now use the Decision Tree Learner to classify a sample with values: 5.1, 3.0, 1.1, 0.1.
```
iris = DataSet(name="iris")
DTL = DecisionTreeLearner(iris)
print(DTL([5.1, 3.0, 1.1, 0.1]))
```
As expected, the Decision Tree learner classifies the sample as "setosa" as seen in the previous section.
## NAIVE BAYES LEARNER
### Overview
#### Theory of Probabilities
The Naive Bayes algorithm is a probabilistic classifier, making use of [Bayes' Theorem](https://en.wikipedia.org/wiki/Bayes%27_theorem). The theorem states that the conditional probability of **A** given **B** equals the conditional probability of **B** given **A** multiplied by the probability of **A**, divided by the probability of **B**.
$$P(A|B) = \dfrac{P(B|A)*P(A)}{P(B)}$$
From probability theory we also have the multiplication rule: if the events *X<sub>i</sub>* are independent, the following holds:
$$P(X_{1} \cap X_{2} \cap ... \cap X_{n}) = P(X_{1})*P(X_{2})*...*P(X_{n})$$
For conditional probabilities this becomes:
$$P(X_{1}, X_{2}, ..., X_{n}|Y) = P(X_{1}|Y)*P(X_{2}|Y)*...*P(X_{n}|Y)$$
#### Classifying an Item
How can we use the above to classify an item though?
We have a dataset with a set of classes (**C**) and we want to classify an item with a set of features (**F**). Essentially what we want to do is predict the class of an item given the features.
For a specific class, **Class**, we will find the conditional probability given the item features:
$$P(Class|F) = \dfrac{P(F|Class)*P(Class)}{P(F)}$$
We will do this for every class and we will pick the maximum. This will be the class the item is classified in.
The features though are a vector with many elements. We need to break the probabilities up using the multiplication rule. Thus the above equation becomes:
$$P(Class|F) = \dfrac{P(Class)*P(F_{1}|Class)*P(F_{2}|Class)*...*P(F_{n}|Class)}{P(F_{1})*P(F_{2})*...*P(F_{n})}$$
The calculation of the conditional probability then depends on the calculation of the following:
*a)* The probability of **Class** in the dataset.
*b)* The conditional probability of each feature occurring in an item classified in **Class**.
*c)* The probabilities of each individual feature.
For *a)*, we will count how many times **Class** occurs in the dataset (aka how many items are classified in a particular class).
For *b)*, if the feature values are discrete ('Blue', '3', 'Tall', etc.), we will count how many times a feature value occurs in items of each class. If the feature values are not discrete, we will go a different route: we will use a distribution function to calculate the probability of values for a given class and feature. If we know the distribution function of the dataset, then great, we will use it to compute the probabilities. If we don't know the function, we can assume the feature follows the normal (Gaussian) distribution, usually without much loss of accuracy; the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) tells us that sums and averages of many independent effects tend towards a Gaussian, which is why this assumption is often reasonable in practice.
*NOTE:* If the values are continuous but we use the discrete approach, there might be issues if we are not lucky. For one, if we have two values, '5.0' and '5.1', the discrete approach treats them as two completely different values, despite them being so close. Second, if we are trying to classify an item with a feature value of '5.15' and that value does not appear for the feature, its probability will be 0, which might lead to misclassification. Generally, the continuous approach is more accurate and more useful, despite the overhead of calculating the distribution function.
The last one, *c)*, is tricky. If feature values are discrete, we can count how many times they occur in the dataset. But what if the feature values are continuous? Imagine a dataset with a height feature. Is it worth it to count how many times each value occurs? Most of the time it is not, since there can be miscellaneous differences in the values (for example, 1.7 meters and 1.700001 meters are practically equal, but they count as different values).
So as we cannot calculate the feature value probabilities, what are we going to do?
Let's take a step back and rethink exactly what we are doing. We are essentially comparing conditional probabilities of all the classes. For two classes, **A** and **B**, we want to know which one is greater:
$$\dfrac{P(F|A)*P(A)}{P(F)} vs. \dfrac{P(F|B)*P(B)}{P(F)}$$
Wait, **P(F)** is the same for both the classes! In fact, it is the same for every combination of classes. That is because **P(F)** does not depend on a class, thus being independent of the classes.
So, for *c)*, we actually don't need to calculate it at all.
#### Wrapping It Up
Classifying an item to a class then becomes a matter of calculating the conditional probabilities of feature values and the probabilities of classes, which is simple and computationally cheap.
Remember though that all of the above holds because we made the assumption that the features are independent. In most real-world cases that is not true. Is that an issue here? Fret not, for the algorithm performs well even with that assumption. That is why the algorithm is called the **Naive** Bayes Classifier: we (naively) assume that the features are independent to make computations easier.
### Implementation
The implementation of the Naive Bayes Classifier is split in two; *Learning* and *Simple*. The *learning* classifier takes as input a dataset and learns the needed distributions from that. It is itself split into two, for discrete and continuous features. The *simple* classifier takes as input not a dataset, but already calculated distributions (a dictionary of `CountingProbDist` objects).
#### Discrete
The implementation for discrete values counts how many times each feature value occurs for each class, and how many times each class occurs. The results are stored in a `CountingProbDist` object.
With the below code you can see the probabilities of the class "Setosa" appearing in the dataset and the probability of the first feature (at index 0) of the same class having a value of 5. Notice that the second probability is relatively small, even though if we observe the dataset we will find that a lot of values are around 5. The issue arises because the features in the Iris dataset are continuous, and we are assuming they are discrete. If the features were discrete (for example, "Tall", "3", etc.) this probably wouldn't have been the case and we would see a much nicer probability distribution.
```
dataset = iris
target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)
attr_dists = {(gv, attr): CountingProbDist(dataset.values[attr])
for gv in target_vals
for attr in dataset.inputs}
for example in dataset.examples:
targetval = example[dataset.target]
target_dist.add(targetval)
for attr in dataset.inputs:
attr_dists[targetval, attr].add(example[attr])
print(target_dist['setosa'])
print(attr_dists['setosa', 0][5.0])
```
First we found the different values for the classes (called targets here) and calculated their distribution. Next we initialized a dictionary of `CountingProbDist` objects, one for each class and feature. Finally, we iterated through the examples in the dataset and calculated the needed probabilities.
Having calculated the different probabilities, we will move on to the predicting function. It will receive as input an item and output the most likely class. Using the above formula, it will multiply the probability of the class appearing, with the probability of each feature value appearing in the class. It will return the max result.
```
def predict(example):
def class_probability(targetval):
return (target_dist[targetval] *
product(attr_dists[targetval, attr][example[attr]]
for attr in dataset.inputs))
return argmax(target_vals, key=class_probability)
print(predict([5, 3, 1, 0.1]))
```
You can view the complete code by executing the next line:
```
psource(NaiveBayesDiscrete)
```
#### Continuous
In the implementation we use the Gaussian/Normal distribution function. To make it work, we need to find the means and standard deviations of features for each class. We make use of the `find_means_and_deviations` Dataset function. On top of that, we will also calculate the class probabilities as we did with the Discrete approach.
```
means, deviations = dataset.find_means_and_deviations()
target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)
print(means["setosa"])
print(deviations["versicolor"])
```
You can see the means of the features for the "Setosa" class and the deviations for "Versicolor".
The prediction function will work similarly to the Discrete algorithm. It will multiply the probability of the class occurring with the conditional probabilities of the feature values for the class.
Since we are using the Gaussian distribution, we will input the value for each feature into the Gaussian function, together with the mean and deviation of the feature. This will return the probability of the particular feature value for the given class. We will repeat for each class and pick the max value.
```
def predict(example):
def class_probability(targetval):
prob = target_dist[targetval]
for attr in dataset.inputs:
prob *= gaussian(means[targetval][attr], deviations[targetval][attr], example[attr])
return prob
return argmax(target_vals, key=class_probability)
print(predict([5, 3, 1, 0.1]))
```
The complete code of the continuous algorithm:
```
psource(NaiveBayesContinuous)
```
#### Simple
The simple classifier (chosen with the argument `simple`) does not learn from a dataset, instead it takes as input a dictionary of already calculated `CountingProbDist` objects and returns a predictor function. The dictionary is in the following form: `(Class Name, Class Probability): CountingProbDist Object`.
Each class has its own probability distribution. The classifier given a list of features calculates the probability of the input for each class and returns the max. The only pre-processing work is to create dictionaries for the distribution of classes (named `targets`) and attributes/features.
The complete code for the simple classifier:
```
psource(NaiveBayesSimple)
```
This classifier is useful when you already have calculated the distributions and you need to predict future items.
### Examples
We will now use the Naive Bayes Classifier (Discrete and Continuous) to classify items:
```
nBD = NaiveBayesLearner(iris, continuous=False)
print("Discrete Classifier")
print(nBD([5, 3, 1, 0.1]))
print(nBD([6, 5, 3, 1.5]))
print(nBD([7, 3, 6.5, 2]))
nBC = NaiveBayesLearner(iris, continuous=True)
print("\nContinuous Classifier")
print(nBC([5, 3, 1, 0.1]))
print(nBC([6, 5, 3, 1.5]))
print(nBC([7, 3, 6.5, 2]))
```
Notice how the Discrete Classifier misclassified the second item, while the Continuous one had no problem.
Let's now take a look at the simple classifier. First we will come up with a sample problem to solve. Say we are given three bags. Each bag contains three letters ('a', 'b' and 'c') of different quantities. We are given a string of letters and we are tasked with finding from which bag the string of letters came.
Since we know the probability distribution of the letters for each bag, we can use the naive bayes classifier to make our prediction.
```
bag1 = 'a'*50 + 'b'*30 + 'c'*15
dist1 = CountingProbDist(bag1)
bag2 = 'a'*30 + 'b'*45 + 'c'*20
dist2 = CountingProbDist(bag2)
bag3 = 'a'*20 + 'b'*20 + 'c'*35
dist3 = CountingProbDist(bag3)
```
Now that we have the `CountingProbDist` objects for each bag/class, we will create the dictionary. Note that we also assign a prior probability to each bag (here 0.5, 0.3 and 0.2 for the first, second and third bag respectively).
```
dist = {('First', 0.5): dist1, ('Second', 0.3): dist2, ('Third', 0.2): dist3}
nBS = NaiveBayesLearner(dist, simple=True)
```
Now we can start making predictions:
```
print(nBS('aab')) # We can handle strings
print(nBS(['b', 'b'])) # And lists!
print(nBS('ccbcc'))
```
The results make intuitive sense. The first bag has a high amount of 'a's, the second has a high amount of 'b's and the third has a high amount of 'c's. The classifier seems to confirm this intuition.
Note that the simple classifier doesn't distinguish between discrete and continuous values. It just takes whatever it is given. Also, the `simple` option on the `NaiveBayesLearner` overrides the `continuous` argument. `NaiveBayesLearner(d, simple=True, continuous=False)` just creates a simple classifier.
## PERCEPTRON CLASSIFIER
### Overview
The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First it trains its weights given a dataset and then it can classify a new item by running it through the network.
Its input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has *n* synapses (one for every item feature), each with its own weight. The nodes then take the dot product of the item features and the synapse weights. These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and we return its index.
Note that in classification problems each node represents a class. The final classification is the class/node with the max output value.
Below you can see a single node/neuron in the outer layer. With *f* we denote the item features, with *w* the synapse weights, then inside the node we have the dot product and the activation function, *g*.

### Implementation
First, we train (calculate) the weights given a dataset, using the `BackPropagationLearner` function of `learning.py`. We then return a function, `predict`, which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer. Then it picks the greatest value and classifies the item in the corresponding class.
```
psource(PerceptronLearner)
```
Note that the Perceptron is a one-layer neural network, without any hidden layers. So, in `BackPropagationLearner`, we will pass no hidden layers. From that function we get our network, which is just one layer, with the weights calculated.
The function `predict` passes the input/example through the network, calculating the dot product of the input and the weights for each node, and returns the class with the maximum dot product.
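Conceptually, the prediction step boils down to a few lines; the sketch below uses hand-picked weights rather than a network trained by `BackPropagationLearner`, and omits the activation function since a monotonic activation does not change which node wins.
```
def perceptron_predict_sketch(weights, example):
    """weights: one weight vector per output node/class."""
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, example)) for w in weights]
    return scores.index(max(scores))

# two classes, four features; the weights here are made up for illustration
weights = [[0.9, 0.2, -0.4, -0.1], [-0.3, 0.1, 0.8, 0.6]]
print(perceptron_predict_sketch(weights, [5, 3, 1, 0.1]))  # prints 0
```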
### Example
We will train the Perceptron on the iris dataset. Since `BackPropagationLearner` works with integer indices rather than strings, we first need to convert class names to integers. Then, we will try to classify the item/flower with measurements of 5, 3, 1, 0.1.
```
iris = DataSet(name="iris")
iris.classes_to_numbers()
perceptron = PerceptronLearner(iris)
print(perceptron([5, 3, 1, 0.1]))
```
The correct output is 0, which means the item belongs in the first class, "setosa". Note that the Perceptron algorithm is not perfect and may produce false classifications.
## LEARNER EVALUATION
In this section we will evaluate and compare algorithm performance. The dataset we will use will again be the iris one.
```
iris = DataSet(name="iris")
```
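The metric used throughout is `err_ratio` from `learning.py`, which reports the fraction of examples a predictor gets wrong. A simplified stand-in, assuming the class label sits at index `dataset.target` of each example and the preceding columns are the inputs, might look like this (the module's version may differ in details):
```
def err_ratio_sketch(predict, dataset):
    """Fraction of dataset examples the predictor misclassifies."""
    wrong = sum(predict(example[:dataset.target]) != example[dataset.target]
                for example in dataset.examples)
    return wrong / len(dataset.examples)
```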
### Naive Bayes
First up is the Naive Bayes algorithm. We will test how well the Discrete Naive Bayes works, and then how the Continuous version fares.
```
nBD = NaiveBayesLearner(iris, continuous=False)
print("Error ratio for Discrete:", err_ratio(nBD, iris))
nBC = NaiveBayesLearner(iris, continuous=True)
print("Error ratio for Continuous:", err_ratio(nBC, iris))
```
The error for the Naive Bayes algorithm is very, very low; close to 0. There is also very little difference between the discrete and continuous version of the algorithm.
### k-Nearest Neighbors
Now we will take a look at kNN, for different values of *k*. Note that *k* should have odd values, to break any ties between two classes.
```
kNN_1 = NearestNeighborLearner(iris, k=1)
kNN_3 = NearestNeighborLearner(iris, k=3)
kNN_5 = NearestNeighborLearner(iris, k=5)
kNN_7 = NearestNeighborLearner(iris, k=7)
print("Error ratio for k=1:", err_ratio(kNN_1, iris))
print("Error ratio for k=3:", err_ratio(kNN_3, iris))
print("Error ratio for k=5:", err_ratio(kNN_5, iris))
print("Error ratio for k=7:", err_ratio(kNN_7, iris))
```
Notice how the error grows as *k* increases. This is generally the case with datasets whose classes are well separated, as with the iris dataset. If items from different classes were closer together, classification would be more difficult. Usually a value of 1, 3 or 5 for *k* suffices.
Also note that since the training set is also the testing set, for *k* equal to 1 we get a perfect score, since the item we want to classify each time is already in the dataset and its closest neighbor is itself.
### Perceptron
For the Perceptron, we first need to convert class names to integers. Let's see how it performs in the dataset.
```
iris2 = DataSet(name="iris")
iris2.classes_to_numbers()
perceptron = PerceptronLearner(iris2)
print("Error ratio for Perceptron:", err_ratio(perceptron, iris2))
```
The Perceptron didn't fare very well, mainly because the dataset is not linearly separable. On simpler datasets the algorithm performs much better, but unfortunately such datasets are rare in real life scenarios.
## AdaBoost
### Overview
**AdaBoost** is an algorithm which uses **ensemble learning**. In ensemble learning the hypotheses in the collection, or ensemble, vote for what the output should be and the output with the majority votes is selected as the final answer.
The AdaBoost algorithm, as mentioned in the book, works with a **weighted training set** and **weak learners** (classifiers with accuracy around 50% + epsilon, i.e. slightly better than random guessing). It manipulates the weights attached to the examples that are shown to it, giving more importance to examples with higher weights.
All the examples start with equal weights and a hypothesis is generated using these examples. The weights of incorrectly classified examples are increased so that the next hypothesis is more likely to classify them correctly, while the weights of correctly classified examples are reduced. This process is repeated *K* times (where *K* is an input to the algorithm), so *K* hypotheses are generated.
These *K* hypotheses are also assigned weights according to their performance on the weighted training set. The final ensemble hypothesis is the weighted-majority combination of these *K* hypotheses.
The special property of AdaBoost is that, by using weak learners and a sufficiently large *K*, a highly accurate classifier can be learned irrespective of the complexity of the function being learned or how weak the individual hypotheses are.
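The weight bookkeeping for a single boosting round can be sketched as follows (a simplified illustration of the idea, not the module's `AdaBoost`):
```
import math

def boost_round_sketch(examples, labels, predict, weights):
    """One round: re-weight the examples and score the hypothesis."""
    error = sum(w for x, y, w in zip(examples, labels, weights) if predict(x) != y)
    error = min(max(error, 1e-10), 1 - 1e-10)  # guard against division by zero
    # shrink the weights of correctly classified examples ...
    new_weights = [w * error / (1 - error) if predict(x) == y else w
                   for x, y, w in zip(examples, labels, weights)]
    # ... then renormalise so the weights sum to 1 again
    total = sum(new_weights)
    new_weights = [w / total for w in new_weights]
    hypothesis_weight = math.log((1 - error) / error)
    return new_weights, hypothesis_weight
```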
### Implementation
As seen in the previous section, the `PerceptronLearner` does not perform that well on the iris dataset. We'll use perceptron as the learner for the AdaBoost algorithm and try to increase the accuracy.
Let's first see what AdaBoost is exactly:
```
psource(AdaBoost)
```
AdaBoost takes as inputs **L** and *K*, where **L** is the learner and *K* is the number of hypotheses to be generated. The learner **L** takes as input a dataset and the weights associated with the examples in that dataset. But the `PerceptronLearner` does not handle weights and only takes a dataset as its input.
To remedy that we will give as input to the PerceptronLearner a modified dataset in which the examples will be repeated according to the weights associated to them. Intuitively, what this will do is force the learner to repeatedly learn the same example again and again until it can classify it correctly.
To convert `PerceptronLearner` so that it can take weights as input too, we will have to pass it through the **`WeightedLearner`** function.
```
psource(WeightedLearner)
```
The `WeightedLearner` function will then call the `PerceptronLearner`, during each iteration, with the modified dataset which contains the examples according to the weights associated with them.
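The replication trick itself is simple. Given example weights, something along these lines builds the over-sampled training set (an illustration of the idea, not the module's `WeightedLearner`):
```
def replicate_by_weight_sketch(examples, weights, factor=10):
    """Repeat each example roughly in proportion to its weight."""
    weighted = []
    for example, w in zip(examples, weights):
        weighted.extend([example] * max(1, round(w * factor)))
    return weighted

print(replicate_by_weight_sketch(['a', 'b', 'c'], [0.7, 0.2, 0.1]))
# ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'c']
```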
### Example
We will pass the `PerceptronLearner` through `WeightedLearner` function. Then we will create an `AdaboostLearner` classifier with number of hypotheses or *K* equal to 5.
```
WeightedPerceptron = WeightedLearner(PerceptronLearner)
AdaboostLearner = AdaBoost(WeightedPerceptron, 5)
iris2 = DataSet(name="iris")
iris2.classes_to_numbers()
adaboost = AdaboostLearner(iris2)
adaboost([5, 3, 1, 0.1])
```
That is the correct answer. Let's check the error rate of adaboost with perceptron.
```
print("Error ratio for adaboost: ", err_ratio(adaboost, iris2))
```
It reduced the error rate considerably. Unlike the `PerceptronLearner`, `AdaBoost` was able to learn the complexity in the iris dataset.
```
import pandas as pd
import numpy as np
import math
import random
import operator
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.datasets import make_regression
%matplotlib inline
```
# Dataset creation
```
random.seed(98103)
n = 30
x = np.array([random.random() for i in range(n)])
sin = lambda x: math.sin(4*x)
vsin = np.vectorize(sin)
y = vsin(x)
#np.random.seed(0)
#x = 2 - 3 * np.random.normal(0, 1, 30)
#y = x - 2 * (x ** 2) + 0.5 * (x ** 3) + np.random.normal(-3, 3, 30)
```
Adding Gaussian noise to the dataset
```
random.seed(1)
e = np.array([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
plt.scatter(x,y, s=10)
plt.show()
data= pd.DataFrame({'X1': x, 'Y': y})
```
# Define a linear fit
```
# transforming the data to include another axis
X = x[:, np.newaxis]
Y = y[:, np.newaxis]
model = LinearRegression()
model.fit(X, Y)
y_pred = model.predict(X)
rmse = np.sqrt(mean_squared_error(Y,y_pred))
r2 = r2_score(Y,y_pred)
print(rmse)
print(r2)
plt.scatter(X, Y, s=10)
plt.plot(X, y_pred, color='r')
plt.show()
```
# Define a polynomial fit
## Second degree polynomial
```
polynomial_features= PolynomialFeatures(degree=2)
x_poly = polynomial_features.fit_transform(X)
model = LinearRegression()
model.fit(x_poly, Y)
y_poly_pred = model.predict(x_poly)
rmse = np.sqrt(mean_squared_error(Y,y_poly_pred))
r2 = r2_score(Y,y_poly_pred)
print(rmse)
print(r2)
plt.scatter(X, Y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(X,y_poly_pred), key=sort_axis)
X_z, y_poly_pred = zip(*sorted_zip)
plt.plot(X_z, y_poly_pred, color='m')
plt.show()
# The coefficients
print('Coefficients: \n', model.coef_)
```
## Third degree polynomial
```
polynomial_features= PolynomialFeatures(degree=3)
x_poly = polynomial_features.fit_transform(X)
model = LinearRegression()
model.fit(x_poly, Y)
y_poly_pred = model.predict(x_poly)
rmse = np.sqrt(mean_squared_error(Y,y_poly_pred))
r2 = r2_score(Y,y_poly_pred)
print(rmse)
print(r2)
plt.scatter(X, Y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(X,y_poly_pred), key=sort_axis)
X_z, y_poly_pred = zip(*sorted_zip)
plt.plot(X_z, y_poly_pred, color='m')
plt.show()
# The coefficients
print('Coefficients: \n', model.coef_)
```
## High order polynomial (degree 20)
```
polynomial_features= PolynomialFeatures(degree=20)
x_poly = polynomial_features.fit_transform(X)
model = LinearRegression()
model.fit(x_poly, Y)
y_poly_pred = model.predict(x_poly)
rmse = np.sqrt(mean_squared_error(Y,y_poly_pred))
r2 = r2_score(Y,y_poly_pred)
print(rmse)
print(r2)
plt.scatter(X, Y, s=10)
# sort the values of x before line plot
sort_axis = operator.itemgetter(0)
sorted_zip = sorted(zip(X,y_poly_pred), key=sort_axis)
X_z, y_poly_pred = zip(*sorted_zip)
plt.plot(X_z, y_poly_pred, color='m')
plt.show()
# The coefficients
print('Coefficients: \n', model.coef_)
```
# Ridge regularization
## Generate random dataset
```
X, Y, w = make_regression(n_samples=10, n_features=30, coef=True,
random_state=1, bias=3.5)
model = LinearRegression()
model.fit(X, Y)
y_pred = model.predict(X)
rmse = np.sqrt(mean_squared_error(Y,y_pred))
r2 = r2_score(Y,y_pred)
print(rmse)
print(r2)
# The coefficients
print('Coefficients: \n', model.coef_)
```
# Ridge regression
```
clf = Ridge()
coefs = []
errors = []
alphas = np.logspace(-6, 6, 200)
# Train the model with different regularisation strengths
for a in alphas:
clf.set_params(alpha=a)
clf.fit(X, Y)
coefs.append(clf.coef_)
# Display results
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
plt.xlabel('alpha')
plt.ylabel('weights')
plt.title('Ridge coefficients as a function of the regularization')
plt.axis('tight')
plt.show()
```
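The `errors` list above is declared but never filled. As a follow-up, one could record the training error at each regularisation strength and plot it, reusing the `X`, `Y`, `clf` and `alphas` variables from the cell above (a sketch, not part of the original analysis):
```
errors = []
for a in alphas:
    clf.set_params(alpha=a)
    clf.fit(X, Y)
    errors.append(mean_squared_error(Y, clf.predict(X)))

plt.plot(alphas, errors)
plt.xscale('log')
plt.xlabel('alpha')
plt.ylabel('training MSE')
plt.title('Ridge training error as a function of the regularization')
plt.show()
```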
# Lasso regularization
```
clf = linear_model.Lasso()
coefs = []
errors = []
alphas = np.logspace(-6, 6)
# Train the model with different regularisation strengths
for a in alphas:
clf.set_params(alpha=a, max_iter=10000)
clf.fit(X, Y)
coefs.append(clf.coef_)
# Display results
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
plt.xlabel('alpha')
plt.ylabel('weights')
plt.title('Lasso coefficients as a function of the regularization')
plt.axis('tight')
plt.show()
```
https://towardsdatascience.com/how-to-perform-lasso-and-ridge-regression-in-python-3b3b75541ad8
For MS (multiple sclerosis) training we have three datasets: train, validation and holdout.
```
import numpy as np
import pandas as pd
import nibabel as nib
from scipy import interp
from sklearn.utils import shuffle
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve, auc
from sklearn.model_selection import KFold
from sklearn.svm import SVC
import matplotlib.pyplot as plt
import os
import time
import h5py
from config import *
from utils import specificity, sensitivity, balanced_accuracy, shuffle_data, normalize_float
# Start timing
start_time = time.time()
zero_one_normalize = False
dtype = np.float32
# load hdf5 files and extract columns
train_h5 = h5py.File('/analysis/share/Ritter/MS/CIS/train_dataset_FLAIR_lesions_filled.h5', 'r')
holdout_h5 = h5py.File('/analysis/share/Ritter/MS/CIS/holdout_dataset_FLAIR_lesions_filled.h5', 'r')
# loading only labels from original file
y_train = train_h5['y']
y_holdout = holdout_h5['y']
train_lesions_h5 = h5py.File('/analysis/share/Ritter/MS/CIS/train_dataset_lesions.h5', 'r')
holdout_lesions_h5 = h5py.File('/analysis/share/Ritter/MS/CIS/holdout_dataset_lesions.h5', 'r')
lesion_masks_train = train_lesions_h5['masks']
lesion_masks_holdout = holdout_lesions_h5['masks']
```
## Convert to lesion volume
```
# convert data to numpy arrays using lesions masks
X_train = np.array(lesion_masks_train, dtype=dtype)
y_train = np.array(y_train)
X_holdout = np.array(lesion_masks_holdout, dtype=dtype)
y_holdout = np.array(y_holdout)
print("Total datset length: {}".format(len(y_train)))
print("Number of healthy controls: {}".format(len(y_train[y_train==0.])))
print("Number of MS patients: {}".format(len(y_train[y_train==1.])))
# sum over all dimensions
X_train = np.sum(X_train, axis=(1, 2, 3)).reshape(-1, 1)
X_holdout = np.sum(X_holdout, axis=(1, 2, 3)).reshape(-1, 1)
_, bins, _ = plt.hist(X_train[y_train==1.], bins=20, alpha=0.5, range=[0, 8000])
_ = plt.hist(X_train[y_train==0.], bins=bins, alpha=0.5, range=[0, 8000])
plt.legend(["MS", "HC"])
```
## Normalization
```
def normalize(train, test):
# get training set moments
mean = np.mean(train)
std = np.std(train)
# apply on train and test
train = (train - mean)/std
test = (test - mean)/std
return train, test
```
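The helpers `sensitivity`, `specificity` and `balanced_accuracy` are imported from the local `utils` module, which is not shown here. For binary 0/1 labels they are assumed to behave like the following sketches built on `sklearn`'s confusion matrix:
```
from sklearn.metrics import confusion_matrix

def sensitivity_sketch(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn)  # true positive rate

def specificity_sketch(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tn / (tn + fp)  # true negative rate

def balanced_accuracy_sketch(y_true, y_pred):
    return 0.5 * (sensitivity_sketch(y_true, y_pred) +
                  specificity_sketch(y_true, y_pred))
```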
## Training
```
from sklearn.model_selection import GridSearchCV
from sklearn import preprocessing
from sklearn.pipeline import make_pipeline
def svc_param_selection(X, y, n_folds):
Cs = [0.001, 0.01, 0.1, 1, 10]
kernels = ['linear', 'rbf']
param_grid = {'svc__C': Cs,
'svc__kernel': kernels}
# use standard scaler for preprocessing
scaler = preprocessing.StandardScaler()
pipeline = make_pipeline(scaler, SVC(gamma='auto'))
grid_search = GridSearchCV(pipeline, param_grid, cv=n_folds, n_jobs=10)
grid_search.fit(X, y)
grid_search.best_params_
return grid_search.best_params_, grid_search.cv_results_
kf = KFold(n_splits=7)
fold = 0
best_params = []
train_balanced_accuracies = []
train_sensitivities = []
train_specificities = []
val_balanced_accuracies = []
val_sensitivities = []
val_specificities = []
auc_scores = []
tprs = []
mean_fpr = np.linspace(0, 1, 100)
# shuffle the data once
X_train, y_train = shuffle_data(X_train, y_train)
# nested cross-validation
for train_idx, test_idx in kf.split(X_train):
print("Fold %i" %fold)
fold += 1
# Start inner cross-validation
best_param, cv_result = svc_param_selection(
X_train[train_idx],
y_train[train_idx],
n_folds=5)
print("Best paramter value: {}".format(best_param))
model = SVC(kernel=best_param["svc__kernel"], C=best_param["svc__C"])
model.fit(X_train[train_idx], y_train[train_idx])
# training set results
train_pred = model.predict(X_train[train_idx])
train_bal_acc = balanced_accuracy(y_train[train_idx], train_pred)
train_sens = sensitivity(y_train[train_idx], train_pred)
train_spec = specificity(y_train[train_idx], train_pred)
# val set results
val_pred = model.predict(X_train[test_idx])
val_scores = model.decision_function(X_train[test_idx])
val_bal_acc = balanced_accuracy(y_train[test_idx], val_pred)
val_sens = sensitivity(y_train[test_idx], val_pred)
val_spec = specificity(y_train[test_idx], val_pred)
roc_auc = roc_auc_score(y_train[test_idx], val_scores)
fpr, tpr, thresholds = roc_curve(y_train[test_idx], val_scores)
# Store results
best_params.append(best_param)
train_balanced_accuracies.append(train_bal_acc)
train_sensitivities.append(train_sens)
train_specificities.append(train_spec)
val_balanced_accuracies.append(val_bal_acc)
val_sensitivities.append(val_sens)
val_specificities.append(val_spec)
auc_scores.append(roc_auc)
# interpolate with diagonal to get comparable results
tprs.append(interp(mean_fpr, fpr, tpr))
tprs[-1][0] = 0.0 # correct lowest value after interpolation
# Print results
print("######## Training set results ########")
print("Balanced accuracy {:.2f} %".format(train_bal_acc*100))
print("Sensitivity {:.2f} %".format(train_sens*100))
print("Specificity {:.2f} %".format(train_spec*100))
print("######## Validation set results ########")
print("Balanced accuracy {:.2f} %".format(val_bal_acc*100))
print("Sensitivity {:.2f} %".format(val_sens*100))
print("Specificity {:.2f} %".format(val_spec*100))
print("Area Under the Receiver Operating Curve (ROC AUC score) {:.2f}".format(roc_auc*100))
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC fold %d (AUC = %0.2f)' % (fold, roc_auc))
training_time = time.time() - start_time
print("Training Time: {}h:{}m:{}s".format(
training_time//3600, (training_time//60)%60, training_time%60))
# Print results
print("######## Final results ########")
print("Validation balanced accuracies: \n {}".format(val_balanced_accuracies))
print("Validation balanced accuracies mean: {}".format(np.mean(val_balanced_accuracies)))
print("Validation final sensitivities: \n {}".format(val_sensitivities))
print("Validation final sensitivities' mean: {}".format(np.mean(val_sensitivities)))
print("Validation final specificities: \n {}".format(val_specificities))
print("Validation final specificities' mean: {}".format(np.mean(val_specificities)))
print("Mean ROC AUC score {:.2f}".format(np.mean(auc_scores)*100))
# Plot ROC Curves
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0 # correct max value after interpolation and mean
mean_auc = auc(mean_fpr, mean_tpr)
#assert(mean_auc == np.mean(auc_scores))
std_auc = np.std(auc_scores)
plt.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
training_time = time.time() - start_time
counter = {}
def majority_vote(best_params):
"""
Find the most often used combination
of parameters.
"""
assert(len(best_params)>=1)
counter = {}
# count unique value list
for i in range(len(best_params)):
# turn values into key
new_key = ""
for x in list(best_params[i].values()):
new_key = new_key + str(x) + "_"
if new_key in counter.keys():
counter[new_key] += 1
else:
counter[new_key] = 1
# select most frequent value list
majority_param = max(counter, key=lambda key: counter[key])
# reformat to list
majority_param = majority_param[:-1].split("_")
# reformat to dictionary
result = {}
for key, value in zip(best_params[0].keys(), majority_param):
result[key] = value
return result
majority_param = majority_vote(best_params)
print(majority_param)
```
# Evaluation
Train on the entire training set with the best parameters from above and test on the holdout dataset for final performance.
```
# training args
kernel = majority_param["svc__kernel"]
C = float(majority_param["svc__C"])
model = SVC(kernel=kernel, C=C)
num_trials = 10
train_balanced_accuracies = []
train_sensitivities = []
train_specificities = []
holdout_balanced_accuracies = []
holdout_sensitivities = []
holdout_specificities = []
auc_scores = []
tprs = []
mean_fpr = np.linspace(0, 1, 100)
for i in range(num_trials):
print("Trial %i" %i)
# shuffle the data each time
X_train, y_train = shuffle_data(X_train, y_train)
# normalize
X_train, X_holdout = normalize(X_train, X_holdout)
# Start training
model.fit(X_train, y_train)
# training set results
train_pred = model.predict(X_train)
train_bal_acc = balanced_accuracy(y_train, train_pred)
train_sens = sensitivity(y_train, train_pred)
train_spec = specificity(y_train, train_pred)
# holdout set results
holdout_pred = model.predict(X_holdout)
holdout_scores = model.decision_function(X_holdout)
holdout_bal_acc = balanced_accuracy(y_holdout, holdout_pred)
holdout_sens = sensitivity(y_holdout, holdout_pred)
holdout_spec = specificity(y_holdout, holdout_pred)
roc_auc = roc_auc_score(y_holdout, holdout_scores)
fpr, tpr, thresholds = roc_curve(y_holdout, holdout_scores)
# Store results
train_balanced_accuracies.append(train_bal_acc)
train_sensitivities.append(train_sens)
train_specificities.append(train_spec)
holdout_balanced_accuracies.append(holdout_bal_acc)
holdout_sensitivities.append(holdout_sens)
holdout_specificities.append(holdout_spec)
auc_scores.append(roc_auc)
# interpolate with diagonal to get comparable results
tprs.append(interp(mean_fpr, fpr, tpr))
tprs[-1][0] = 0.0 # correct lowest value after interpolation
# Print results
print("######## Training set results ########")
print("Balanced accuracy {:.2f} %".format(train_bal_acc*100))
print("Sensitivity {:.2f} %".format(train_sens*100))
print("Specificity {:.2f} %".format(train_spec*100))
print("######## Holdout set results ########")
print("Balanced accuracy {:.2f} %".format(holdout_bal_acc*100))
print("Sensitivity {:.2f} %".format(holdout_sens*100))
print("Specificity {:.2f} %".format(holdout_spec*100))
print("Area Under the Receiver Operating Curve (ROC AUC score) {:.2f}".format(roc_auc*100))
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC trial %d (AUC = %0.2f)' % (i, roc_auc))
training_time = time.time() - start_time
print("Training Time: {}h:{}m:{}s".format(
training_time//3600, (training_time//60)%60, training_time%60))
# Print results
print("######## Final results ########")
print("Holdout balanced accuracies: \n {}".format(holdout_balanced_accuracies))
print("Holdout balanced accuracies mean: {}".format(np.mean(holdout_balanced_accuracies)))
print("Holdout final sensitivities: \n {}".format(holdout_sensitivities))
print("Holdout final sensitivities' mean: {}".format(np.mean(holdout_sensitivities)))
print("Holdout final specificities: \n {}".format(holdout_specificities))
print("Holdout final specificities' mean: {}".format(np.mean(holdout_specificities)))
print("Mean ROC AUC score {:.2f}".format(np.mean(auc_scores)*100))
# Plot ROC Curves
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0 # correct max value after interpolation and mean
mean_auc = auc(mean_fpr, mean_tpr)
#assert(mean_auc == np.mean(auc_scores))
std_auc = np.std(auc_scores)
plt.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
total_time = time.time() - start_time
print("Training Time: {}h:{}m:{}s".format(
training_time//3600, (training_time//60)%60, training_time%60))
print("Total time elapsed: {}h:{}m:{}s".format(
total_time//3600, (total_time//60)%60, total_time%60))
quit()
```
<a href="https://colab.research.google.com/github/Educat8n/Reinforcement-Learning-for-Game-Playing-and-More/blob/main/Module3/Module_3.1_DQN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Module 3: DRL Algorithm Implementations

# DQN to play Atari
```
import gym
import sys
import random
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from datetime import datetime
from collections import deque
from gym import spaces
import numpy as np
from gym.spaces.box import Box
from gym.core import Wrapper, ObservationWrapper
import cv2
## import packages
import tensorflow.keras as K
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, Dropout, Conv2D, MaxPooling2D, Flatten, add, Embedding, Conv2DTranspose, GlobalMaxPooling2D, Input, UpSampling2D, Reshape, average
def DQNAgent(state_shape, n_actions, LEARN_RATE = 0.1):
model = Sequential()
model.add(Conv2D(32, (8, 8), strides=4, activation='relu',input_shape=state_shape))
model.add(Conv2D(64, (4, 4), strides=2, activation='relu'))
model.add(Conv2D(64, (3, 3), strides=1, activation='relu'))
model.add(Conv2D(1024, (7, 7), strides=1, activation='relu'))
model.add(Flatten())
model.add(Dense(n_actions, activation='linear'))
model.summary()
opt = K.optimizers.RMSprop(lr=LEARN_RATE)
model.compile(loss="mean_squared_error", optimizer=opt)
return model
```

```
class FrameBuffer(Wrapper): # Buffer frames together as observation space
def __init__(self, env, n_frames=4, dim_order='tensorflow'):
"""A gym wrapper that reshapes, crops and scales image into the desired shapes"""
super(FrameBuffer, self).__init__(env)
self.dim_order = dim_order
if dim_order == 'tensorflow':
height, width, n_channels = env.observation_space.shape
"""Multiply channels dimension by number of frames"""
obs_shape = [height, width, n_channels * n_frames]
else:
raise ValueError('dim_order should be "tensorflow" or "pytorch", got {}'.format(dim_order))
self.observation_space = Box(0.0, 1.0, obs_shape)
self.framebuffer = np.zeros(obs_shape, 'float32')
def reset(self):
"""resets breakout, returns initial frames"""
self.framebuffer = np.zeros_like(self.framebuffer)
self.update_buffer(self.env.reset())
return self.framebuffer
def step(self, action):
"""plays breakout for 1 step, returns frame buffer"""
new_img, reward, done, info = self.env.step(action)
self.update_buffer(new_img)
return self.framebuffer, reward, done, info
def update_buffer(self, img):
if self.dim_order == 'tensorflow':
offset = self.env.observation_space.shape[-1]
axis = -1
cropped_framebuffer = self.framebuffer[:,:,:-offset]
self.framebuffer = np.concatenate([img, cropped_framebuffer], axis = axis)
class PreprocessAtari(ObservationWrapper): # Grayscale, Scaling and Cropping
def __init__(self, env):
"""A gym wrapper that crops, scales image into the desired shapes and grayscales it."""
super(PreprocessAtari, self).__init__(env)
self.img_size = (84, 84)
self.observation_space = Box(0.0, 1.0, (self.img_size[0], self.img_size[1], 1))
def observation(self, img):
"""what happens to each observation"""
# crop image (top and bottom, top from 34, bottom remove last 16)
img = img[34:-16, :, :]
# resize image
img = cv2.resize(img, self.img_size)
img = img.mean(-1,keepdims=True)
img = img.astype('float32') / 255.
return img
%%capture
!wget http://www.atarimania.com/roms/Roms.rar
!mkdir /content/ROM/
!unrar e /content/Roms.rar /content/ROM/
!python -m atari_py.import_roms /content/ROM/
env = gym.make("BreakoutDeterministic-v4")
print(f"The original observation space is {env.observation_space}")
env = PreprocessAtari(env)
print(f"The original observation space is {env.observation_space}")
env = FrameBuffer(env, n_frames=4, dim_order='tensorflow')
print(f"The new observation space is {env.observation_space}")
obs = env.reset()
plt.title("Agent observation (4 frames: left most recent)") ##
plt.imshow(obs.transpose([0,2,1]).reshape([env.observation_space.shape[0],-1]), cmap='gray');
for i in range(3):
obs, _, _, _ = env.step(env.action_space.sample())
plt.title("Agent observation (4 frames: left most recent)")
plt.imshow(obs.transpose([0,2,1]).reshape([env.observation_space.shape[0],-1]), cmap='gray');
def epsilon_greedy_policy(state, epsilon):
"""pick actions given qvalues. Uses epsilon-greedy exploration strategy. """
if np.random.random() <= epsilon: #Explore
return env.action_space.sample()
else: #Exploit
return np.argmax(agent.predict(tf.expand_dims(state, axis = 0)))
agent = DQNAgent(env.observation_space.shape, env.action_space.n) # Local Network
target_model = DQNAgent(env.observation_space.shape, env.action_space.n) # Target Network
## Assign same weights to Q and Target Q
target_model.set_weights(agent.get_weights())
```
## Training the DQN agent
```
buffer_size=5000
replay_buffer = deque(maxlen=buffer_size)
def train(epochs=400, gamma = 1.0, epsilon = 1.0, epsilon_min = 0.01, epsilon_decay = 0.995, batch_size = 32 ):
scores = deque(maxlen=100)
avg_scores = []
for e in range(epochs):
state = env.reset()
done = False
i = 0
while not done: # Build memory
action = epsilon_greedy_policy(state,epsilon)
next_state, reward, done, _ = env.step(action)
#print(next_state.shape)
#replay_buffer.add(state, action, reward, next_state, done)
replay_buffer.append((state, action, reward, next_state, done))
state = next_state
epsilon = max(epsilon_min, epsilon_decay*epsilon) # decrease epsilon
i += reward
scores.append(i)
mean_score = np.mean(scores)
avg_scores.append(mean_score)
if mean_score >= 3.5: #Stop if a threshold is reached
print('Solved after {} trials ✔'.format(e))
return avg_scores
if e % 10 == 0: # Print info after every 10 episodes + Update target
print('[Episode {}] - Average Score: {}'.format(e, mean_score))
target_model.set_weights(agent.get_weights()) #Hard Update
replay(batch_size)
print('Did not solve after {} episodes 😞'.format(e))
return avg_scores
```
## Training the DQN agent - Replay Function
```
def replay(batch_size, gamma = 1.0):
x_batch, y_batch = [], []
minibatch = random.sample(replay_buffer, min(len(replay_buffer), batch_size))
for state, action, reward, next_state, done in minibatch:
y_target = target_model.predict(tf.expand_dims(state, axis = 0))
y_target[0][action] = reward if done else reward + gamma * np.max(agent.predict(tf.expand_dims(next_state, axis = 0))[0])
x_batch.append(state)
y_batch.append(y_target[0])
agent.fit(np.array(x_batch), np.array(y_batch), batch_size=len(x_batch), verbose=0)
a = train()
```
## Training the DQN agent - Reward Plot
```
plt.plot(np.linspace(0,400,len(a),endpoint=False), np.asarray(a))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over 100 Episodes)')
plt.show()
```
<a href="https://colab.research.google.com/github/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/small_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**<h3>Summarize SQL source code using the CodeTrans multitask training model</h3>**
<h4>You can make free predictions online through this
<a href="https://huggingface.co/SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask">Link</a></h4> (When using the online prediction, you need to parse and tokenize the code first.)
**1. Load the necessary libraries, including huggingface transformers**
```
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
```
**2. Load the summarization pipeline and move it to the GPU if available**
```
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask", skip_special_tokens=True),
device=0
)
```
**3. Give the code for summarization, then parse and tokenize it**
```
code = "select time (fieldname) from tablename" #@param {type:"raw"}
import re
import sqlparse
scanner=re.Scanner([
(r"\[[^\]]*\]", lambda scanner,token: token),
(r"\+", lambda scanner,token:"R_PLUS"),
(r"\*", lambda scanner,token:"R_KLEENE"),
(r"%", lambda scanner,token:"R_WILD"),
(r"\^", lambda scanner,token:"R_START"),
(r"\$", lambda scanner,token:"R_END"),
(r"\?", lambda scanner,token:"R_QUESTION"),
(r"[\.~``;_a-zA-Z0-9\s=:\{\}\-\\]+", lambda scanner,token:"R_FREE"),
(r'.', lambda scanner, token: None),
])
def tokenizeRegex(s):
results, remainder=scanner.scan(s)
return results
def my_traverse(token_list, statement_list, result_list):
for t in token_list:
if t.ttype == None:
my_traverse(t, statement_list, result_list)
elif t.ttype != sqlparse.tokens.Whitespace:
statement_list.append(t.ttype)
result_list.append(str(t))
return statement_list, result_list
def sanitizeSql(sql):
s = sql.strip().lower()
if not s[-1] == ";":
s += ';'
s = re.sub(r'\(', r' ( ', s)
s = re.sub(r'\)', r' ) ', s)
s = s.replace('#', '')
return s
statement_list = []
result_list = []
code = sanitizeSql(code)
tokens = sqlparse.parse(code)
statements, result = my_traverse(tokens, statement_list, result_list)
table_map = {}
column_map = {}
for i in range(len(statements)):
if statements[i] in [sqlparse.tokens.Number.Integer, sqlparse.tokens.Literal.Number.Integer]:
result[i] = "CODE_INTEGER"
elif statements[i] in [sqlparse.tokens.Number.Float, sqlparse.tokens.Literal.Number.Float]:
result[i] = "CODE_FLOAT"
elif statements[i] in [sqlparse.tokens.Number.Hexadecimal, sqlparse.tokens.Literal.Number.Hexadecimal]:
result[i] = "CODE_HEX"
elif statements[i] in [sqlparse.tokens.String.Symbol, sqlparse.tokens.String.Single, sqlparse.tokens.Literal.String.Single, sqlparse.tokens.Literal.String.Symbol]:
result[i] = tokenizeRegex(result[i])
elif statements[i] in[sqlparse.tokens.Name, sqlparse.tokens.Name.Placeholder, sqlparse.sql.Identifier]:
old_value = result[i]
if old_value in column_map:
result[i] = column_map[old_value]
else:
result[i] = 'col'+ str(len(column_map))
column_map[old_value] = result[i]
elif (result[i] == "." and statements[i] == sqlparse.tokens.Punctuation and i > 0 and result[i-1].startswith('col')):
old_value = result[i-1]
if old_value in table_map:
result[i-1] = table_map[old_value]
else:
result[i-1] = 'tab'+ str(len(table_map))
table_map[old_value] = result[i-1]
if (result[i].startswith('col') and i > 0 and (result[i-1] in ["from"])):
old_value = result[i]
if old_value in table_map:
result[i] = table_map[old_value]
else:
result[i] = 'tab'+ str(len(table_map))
table_map[old_value] = result[i]
tokenized_code = ' '.join(result)
print("SQL after tokenized: " + tokenized_code)
```
**4. Make Prediction**
```
pipeline([tokenized_code])
```
# Word2Vec
This section trains everyday Word2Vec models with gensim and PyTorch.
## Gensim
```
import gensim
sentences = [['first', 'sentence'], ['second', 'sentence']]
# Pass in the text data to initialize and train a Word2Vec model in one step
model = gensim.models.Word2Vec(sentences, min_count=1)
model.wv.key_to_index['first']
# Similarity between two words
model.wv.similarity('first', 'second')
```
### Example 1: training an English word2vec model with gensim
A gensim word2vec model can be trained incrementally; the example below spells out the commonly used parameters:
```
from gensim.test.utils import common_texts
from gensim.models import Word2Vec
print(common_texts[:200])
model = Word2Vec(sentences=common_texts, vector_size=100,
window=5, min_count=1, workers=4)
model
model.save("word2vec.model")
# Save first, then load and continue training
model = Word2Vec.load("word2vec.model")
model.train([["hello", "world"]], total_examples=1, epochs=1)
model
vector1 = model.wv['computer'] # get numpy vector of a word
vector1
sims = model.wv.most_similar('computer', topn=10) # get other similar words
sims
```
Save only the trained word-vector key/value pairs, then load them quickly into memory via `KeyedVectors` to look up word vectors:
```
from gensim.models import KeyedVectors
# Store just the words + their trained embeddings.
word_vectors = model.wv
word_vectors.save("word2vec.wordvectors")
# Load back with memory-mapping = read-only, shared across processes.
wv = KeyedVectors.load("word2vec.wordvectors", mmap='r')
vector2 = wv['computer'] # Get numpy vector of a word
vector2
compare = vector1 == vector2
compare.all()
```
The two vectors are identical.
### Example 2: training a Chinese word2vec model with gensim
```
txt_path = 'data/C000008_test.txt'
sentences = [i.split() for i in open(txt_path, 'r', encoding='utf-8').read().split('\n')]
sentences[:2]
model = gensim.models.Word2Vec(
sentences, vector_size=50, window=5, min_count=1, workers=4)
model.save('C000008.word2vec.model')
model.wv.key_to_index
# key index
print(model.wv.key_to_index['中国'])
print(model.wv.key_to_index['澳大利亚'])
# word vector
print(model.wv['中国'])
print(model.wv['澳大利亚'])
# compare two word
print(model.wv.similarity('中国', '澳大利亚'))
```
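As a quick usage sketch (assuming the Chinese model trained above is still in memory), you can also query the nearest neighbours of a word:
```python
# Nearest neighbours of a word by cosine similarity (sketch; reuses `model` from above)
for word, score in model.wv.most_similar('中国', topn=5):
    print(word, round(score, 3))
```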
## PyTorch
This demonstrates training a skip-gram word2vec model in PyTorch, somewhat simplified compared with the paper implementation in the previous section.
```
import matplotlib.pyplot as plt
import torch.optim as optim
import torch.nn as nn
import torch
import numpy as np
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
def random_batch():
random_inputs = []
random_labels = []
random_index = np.random.choice(
range(len(skip_grams)), batch_size, replace=False)
for i in random_index:
random_inputs.append(np.eye(voc_size)[skip_grams[i][0]]) # target
random_labels.append(skip_grams[i][1]) # context word
return random_inputs, random_labels
class Word2Vec(nn.Module):
# Model
def __init__(self):
super(Word2Vec, self).__init__()
        # W and WT are separate weights, not transposes of each other
# voc_size > embedding_size Weight
self.W = nn.Linear(voc_size, embedding_size, bias=False)
# embedding_size > voc_size Weight
self.WT = nn.Linear(embedding_size, voc_size, bias=False)
def forward(self, X):
# X : [batch_size, voc_size]
hidden_layer = self.W(X) # hidden_layer : [batch_size, embedding_size]
# output_layer : [batch_size, voc_size]
output_layer = self.WT(hidden_layer)
return output_layer
```
Define the parameters and start training:
```
batch_size = 2 # mini-batch size
embedding_size = 10 # embedding size
sentences = ["apple banana fruit", "banana orange fruit", "orange banana fruit",
"dog cat animal", "cat monkey animal", "monkey dog animal"]
word_sequence = " ".join(sentences).split()
word_list = " ".join(sentences).split()
word_list = list(set(word_list))
word_dict = {w: i for i, w in enumerate(word_list)}
voc_size = len(word_list)
# Make skip gram of one size window
skip_grams = []
for i in range(1, len(word_sequence) - 1):
target = word_dict[word_sequence[i]]
context = [word_dict[word_sequence[i - 1]],
word_dict[word_sequence[i + 1]]]
for w in context:
skip_grams.append([target, w])
model = Word2Vec()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Training
for epoch in range(9000):
input_batch, target_batch = random_batch()
input_batch = torch.Tensor(input_batch)
target_batch = torch.LongTensor(target_batch)
optimizer.zero_grad()
output = model(input_batch)
# output : [batch_size, voc_size], target_batch : [batch_size] (LongTensor, not one-hot)
loss = criterion(output, target_batch)
if (epoch + 1) % 1000 == 0:
print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))
loss.backward()
optimizer.step()
for i, label in enumerate(word_list):
W, WT = model.parameters()
x, y = W[0][i].item(), W[1][i].item()
plt.scatter(x, y)
plt.annotate(label, xy=(x, y), xytext=(5, 2),
textcoords='offset points', ha='right', va='bottom')
plt.show()
import os
os.remove('word2vec.model')
os.remove('word2vec.wordvectors')
os.remove('C000008.word2vec.model')
```
End of this section.
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'r1.0.0rc1'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]
## Install TorchAudio
!pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
## Grab the config we'll use in this example
!mkdir configs
```
# Introduction
This VAD tutorial is based on the MarbleNet model from the paper "[MarbleNet: Deep 1D Time-Channel Separable Convolutional Neural Network for Voice Activity Detection](https://arxiv.org/abs/2010.13886)", which is a modification and extension of [MatchboxNet](https://arxiv.org/abs/2004.08531).
The notebook will follow the steps below:
- Dataset preparation: instructions for downloading the datasets, and how to convert them to a format suitable for use with nemo_asr
- Audio preprocessing (feature extraction): signal normalization, windowing, (log) spectrogram (or mel scale spectrogram, or MFCC)
- Data augmentation using SpecAugment "[SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779)" to increase the number of data samples
- Developing a small neural classification model which can be trained efficiently
- Model training on the Google Speech Commands dataset and Freesound dataset in NeMo
- Evaluation of the model's error cases by listening to the samples
- Adding more evaluation metrics and transfer learning/fine-tuning
```
# Some utility imports
import os
from omegaconf import OmegaConf
```
# Data Preparation
## Download the background data
We suggest using the background categories of the [freesound](https://freesound.org/) dataset as our non-speech/background data.
We provide scripts for downloading and resampling it; please have a look at the Data Preparation part of the NeMo docs. Note that downloading this dataset may take hours.
**NOTE:** This tutorial serves as a demonstration of how to train and evaluate VAD models using NeMo. We avoid the freesound dataset here, and instead use the `_background_noise_` category in the Google Speech Commands Dataset as non-speech/background data.
## Download the speech data
We will use the open-source Google Speech Commands Dataset (we will use V2 of the dataset for the tutorial, but only very minor changes are required to support the V1 dataset) as our speech data. Google Speech Commands Dataset V2 takes roughly 6GB of disk space. The scripts below will download the dataset and convert it to a format suitable for use with nemo_asr.
**NOTE**: You may additionally pass a `--test_size` or `--val_size` flag for splitting the train, val and test data.
You may additionally pass `--seg_len` flag for indicating the segment length. Default is 0.63s.
**NOTE**: You may additionally pass a `--rebalance_method='fixed|over|under'` at the end of the script to rebalance the class samples in the manifest.
* 'fixed': Fixed number of samples for each class. For example, train 500, val 100, and test 200. (Change number in script if you want)
* 'over': Oversampling rebalance method
* 'under': Undersampling rebalance method
**NOTE**: We only take a small subset of the speech data for demonstration; if you want to use the entire speech dataset, don't forget to **delete `--demo`** and change the rebalance method/number. The `_background_noise_` category only has **6** audio files, so we generate more segments based on those files to enlarge our background training data. If you want to use your own background noise data, just change `background_data_root` and **delete `--demo`**.
```
tmp = 'src'
data_folder = 'data'
if not os.path.exists(tmp):
os.makedirs(tmp)
if not os.path.exists(data_folder):
os.makedirs(data_folder)
script = os.path.join(tmp, 'process_vad_data.py')
if not os.path.exists(script):
!wget -P $tmp https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_vad_data.py
speech_data_root = os.path.join(data_folder, 'google_dataset_v2')
background_data_root = os.path.join(data_folder, 'google_dataset_v2/google_speech_recognition_v2/_background_noise_')# your <resampled freesound data directory>
out_dir = os.path.join(data_folder, 'manifest')
if not os.path.exists(speech_data_root):
os.mkdir(speech_data_root)
# This may take a few minutes
!python $script \
--out_dir={out_dir} \
--speech_data_root={speech_data_root} \
--background_data_root={background_data_root}\
--log \
--demo \
--rebalance_method='fixed'
```
## Preparing the manifest file
Manifest files are the data structure used by NeMo to declare a few important details about the data:
1) `audio_filepath`: Refers to the path to the raw audio file <br>
2) `label`: The class label (speech or background) of this sample <br>
3) `duration`: The length of the audio file, in seconds.<br>
4) `offset`: The start of the segment, in seconds.
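For reference, each manifest line is a single JSON object with exactly these fields; the sketch below uses a made-up path and values purely for illustration:
```python
import json

# Sketch of one manifest line; the path and values are illustrative placeholders
entry = {
    "audio_filepath": "data/google_dataset_v2/sample_0001.wav",
    "label": "speech",   # or "background"
    "duration": 0.63,    # length of the segment, in seconds
    "offset": 0.0,       # start of the segment, in seconds
}
print(json.dumps(entry))
```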
```
# change below if you don't have or don't want to use rebalanced data
train_dataset = 'data/manifest/balanced_background_training_manifest.json,data/manifest/balanced_speech_training_manifest.json'
val_dataset = 'data/manifest/background_validation_manifest.json,data/manifest/speech_validation_manifest.json'
test_dataset = 'data/manifest/balanced_background_testing_manifest.json,data/manifest/balanced_speech_testing_manifest.json'
```
## Read a few rows of the manifest file
Manifest files are the data structure used by NeMo to declare a few important details about the data:
1) `audio_filepath`: Refers to the path to the raw audio file <br>
2) `label`: The class label (speech or background) of this sample <br>
3) `duration`: The length of the audio file, in seconds.
```
sample_test_dataset = test_dataset.split(',')[0]
!head -n 5 {sample_test_dataset}
```
# Training - Preparation
We will be training a MatchboxNet model from the paper "[MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition](https://arxiv.org/abs/2004.08531)", which evolved from the [QuartzNet](https://arxiv.org/pdf/1910.10261.pdf) model. The benefit of QuartzNet over Jasper models is that it uses separable convolutions, which greatly reduce the number of parameters required to get good model accuracy.
MatchboxNet models generally follow the model definition pattern QuartzNet-[BxRxC], where B is the number of blocks, R is the number of convolutional sub-blocks, and C is the number of channels in these blocks. Each sub-block contains a 1-D masked convolution, batch normalization, ReLU, and dropout.
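As a rough illustration only (this is a simplified sketch, not NeMo's actual implementation, which uses masked convolutions and residual connections), one such sub-block could look like this in plain PyTorch:
```python
import torch
import torch.nn as nn

class SeparableConvSubBlock(nn.Module):
    """Sketch of a sub-block: 1-D depthwise-separable conv -> batch norm -> ReLU -> dropout."""
    def __init__(self, channels, kernel_size=13, dropout=0.1):
        super().__init__()
        # Depthwise (per-channel) convolution followed by a pointwise (1x1) convolution
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()
        self.drop = nn.Dropout(dropout)

    def forward(self, x):  # x: [batch, channels, time]
        x = self.pointwise(self.depthwise(x))
        return self.drop(self.act(self.bn(x)))

# Example: a batch of 4 feature maps with 64 channels and 128 time frames
block = SeparableConvSubBlock(64)
print(block(torch.randn(4, 64, 128)).shape)  # torch.Size([4, 64, 128])
```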
```
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
```
## Model Configuration
The MatchboxNet Model is defined in a config file which declares multiple important sections.
They are:
1) `model`: All arguments that will relate to the Model - preprocessors, encoder, decoder, optimizer and schedulers, datasets and any other related information
2) `trainer`: Any argument to be passed to PyTorch Lightning
```
MODEL_CONFIG = "marblenet_3x2x64.yaml"
if not os.path.exists(f"configs/{MODEL_CONFIG}"):
!wget -P configs/ "https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/{MODEL_CONFIG}"
# This line will print the entire config of the MatchboxNet model
config_path = f"configs/{MODEL_CONFIG}"
config = OmegaConf.load(config_path)
print(config.pretty())
# Preserve some useful parameters
labels = config.model.labels
sample_rate = config.sample_rate
```
### Setting up the datasets within the config
You will notice that there are a few config dictionaries called `train_ds`, `validation_ds` and `test_ds`. These are the configurations used to set up the Dataset and DataLoaders for the corresponding splits.
```
print(config.model.train_ds.pretty())
```
### `???` inside configs
You will often notice that some configs have `???` in place of paths. This is used as a placeholder so that the user can change the value at a later time.
Let's add the paths to the manifests to the config above.
```
config.model.train_ds.manifest_filepath = train_dataset
config.model.validation_ds.manifest_filepath = val_dataset
config.model.test_ds.manifest_filepath = test_dataset
```
## Building the PyTorch Lightning Trainer
NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem!
Let's first instantiate a Trainer object!
```
import torch
import pytorch_lightning as pl
print("Trainer config - \n")
print(config.trainer.pretty())
# Let's modify some trainer configs for this demo
# Checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# Reduces maximum number of epochs to 5 for quick demonstration
config.trainer.max_epochs = 5
# Remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
```
## Setting up a NeMo Experiment
NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it !
```
from nemo.utils.exp_manager import exp_manager
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# The exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
```
## Building the MatchboxNet Model
MatchboxNet is an ASR model with a classification task - it generates one label for the entire provided audio stream. Therefore we encapsulate it inside the `EncDecClassificationModel` as follows.
```
vad_model = nemo_asr.models.EncDecClassificationModel(cfg=config.model, trainer=trainer)
```
# Training a MatchboxNet Model
As MatchboxNet is inherently a PyTorch Lightning Model, it can easily be trained in a single line - `trainer.fit(model)` !
# Training the model
Even with such a small model (73k parameters), and just 5 epochs (should take just a few minutes to train), you should be able to get a test set accuracy score around 98.83% (this result is for the [freesound](https://freesound.org/) dataset) with enough training data.
**NOTE:** If you follow this tutorial and use the generated background data, you may notice that the results below are acceptable, but please remember that this tutorial is only a **demonstration** and the dataset is not good enough. Please switch to a better background dataset and train with enough data for improvement!
Experiment with increasing the number of epochs or with batch size to see how much you can improve the score!
**NOTE:** Noise robustness is quite important for VAD task. Below we list the augmentation we used in this demo.
Please refer to [05_Online_Noise_Augmentation.ipynb](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/05_Online_Noise_Augmentation.ipynb) for understanding noise augmentation in NeMo.
```
# Noise augmentation
print(config.model.train_ds.augmentor.pretty()) # noise augmentation
print(config.model.spec_augment.pretty()) # SpecAug data augmentation
```
If you are interested in a **pretrained** model, please have a look at [Transfer Learning & Fine-tuning on a new dataset](#Transfer-Learning-&-Fine-tuning-on-a-new-dataset) and the upcoming tutorial 07 Offline_and_Online_VAD_Demo.
### Monitoring training progress
Before we begin training, let's first create a Tensorboard visualization to monitor progress
```
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
```
### Training for 5 epochs
We see below that the model begins to get modest scores on the validation set after just 5 epochs of training
```
trainer.fit(vad_model)
```
# Fast Training
We can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision.
For multi-GPU training, take a look at [the PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html)
For mixed-precision training, take a look at [the PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/amp.html)
```python
# Mixed precision:
trainer = Trainer(amp_level='O1', precision=16)
# Trainer with a distributed backend:
trainer = Trainer(gpus=2, num_nodes=2, accelerator='ddp')
# Of course, you can combine these flags as well.
```
# Evaluation
## Evaluation on the Test set
Let's compute the final score on the test set via `trainer.test(model)`
```
trainer.test(vad_model, ckpt_path=None)
```
## Evaluation of incorrectly predicted samples
Given that we have a trained model, which performs reasonably well, let's try to listen to the samples where the model is least confident in its predictions.
### Extract the predictions from the model
We want the actual logits of the model instead of just the final evaluation score, so we define a function that performs the forward step for us without computing the final loss. Instead, we extract the logits per batch of samples provided.
### Accessing the data loaders
We can utilize the `setup_test_data` method in order to instantiate a data loader for the dataset we want to analyze.
For convenience, we can access these instantiated data loaders using the following accessors - `vad_model._train_dl`, `vad_model._validation_dl` and `vad_model._test_dl`.
```
vad_model.setup_test_data(config.model.test_ds)
test_dl = vad_model._test_dl
```
### Partial Test Step
Below we define a utility function to perform most of the test step. For reference, the test step is defined as follows:
```python
def test_step(self, batch, batch_idx, dataloader_idx=0):
audio_signal, audio_signal_len, labels, labels_len = batch
logits = self.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
loss_value = self.loss(logits=logits, labels=labels)
correct_counts, total_counts = self._accuracy(logits=logits, labels=labels)
return {'test_loss': loss_value, 'test_correct_counts': correct_counts, 'test_total_counts': total_counts}
```
```
@torch.no_grad()
def extract_logits(model, dataloader):
logits_buffer = []
label_buffer = []
# Follow the above definition of the test_step
for batch in dataloader:
audio_signal, audio_signal_len, labels, labels_len = batch
logits = model(input_signal=audio_signal, input_signal_length=audio_signal_len)
logits_buffer.append(logits)
label_buffer.append(labels)
print(".", end='')
print()
print("Finished extracting logits !")
logits = torch.cat(logits_buffer, 0)
labels = torch.cat(label_buffer, 0)
return logits, labels
cpu_model = vad_model.cpu()
cpu_model.eval()
logits, labels = extract_logits(cpu_model, test_dl)
print("Logits:", logits.shape, "Labels :", labels.shape)
# Compute accuracy - `_accuracy` is a PyTorch Lightning Metric !
acc = cpu_model._accuracy(logits=logits, labels=labels)
print(f"Accuracy : {float(acc[0]*100)} %")
```
### Filtering out incorrect samples
Let us now filter out the incorrectly labeled samples from the total set of samples in the test set
```
import librosa
import json
import IPython.display as ipd
# First let's create a utility class to remap the integer class labels to actual string label
class ReverseMapLabel:
def __init__(self, data_loader):
self.label2id = dict(data_loader.dataset.label2id)
self.id2label = dict(data_loader.dataset.id2label)
def __call__(self, pred_idx, label_idx):
return self.id2label[pred_idx], self.id2label[label_idx]
# Next, let's get the indices of all the incorrectly labeled samples
sample_idx = 0
incorrect_preds = []
rev_map = ReverseMapLabel(test_dl)
# Remember, evaluated_tensor = (loss, logits, labels)
probs = torch.softmax(logits, dim=-1)
probas, preds = torch.max(probs, dim=-1)
total_count = cpu_model._accuracy.total_counts_k[0]
incorrect_ids = (preds != labels).nonzero()
for idx in incorrect_ids:
proba = float(probas[idx][0])
pred = int(preds[idx][0])
label = int(labels[idx][0])
idx = int(idx[0]) + sample_idx
incorrect_preds.append((idx, *rev_map(pred, label), proba))
print(f"Num test samples : {total_count.item()}")
print(f"Num errors : {len(incorrect_preds)}")
# First let's sort by confidence of prediction
incorrect_preds = sorted(incorrect_preds, key=lambda x: x[-1], reverse=False)
```
### Examine a subset of incorrect samples
Let's print out the (test id, predicted label, ground truth label, confidence) tuple of first 20 incorrectly labeled samples
```
for incorrect_sample in incorrect_preds[:20]:
print(str(incorrect_sample))
```
### Define a threshold below which we designate a model's prediction as "low confidence"
```
# Filter out how many such samples exist
low_confidence_threshold = 0.8
count_low_confidence = len(list(filter(lambda x: x[-1] <= low_confidence_threshold, incorrect_preds)))
print(f"Number of low confidence predictions : {count_low_confidence}")
```
### Let's listen to the samples in which the model has the least confidence!
```
# First let's create a helper function to parse the manifest files
def parse_manifest(manifest):
data = []
for line in manifest:
line = json.loads(line)
data.append(line)
return data
# Next, let's create a helper function to actually listen to certain samples
def listen_to_file(sample_id, pred=None, label=None, proba=None):
# Load the audio waveform using librosa
filepath = test_samples[sample_id]['audio_filepath']
audio, sample_rate = librosa.load(filepath,
offset = test_samples[sample_id]['offset'],
duration = test_samples[sample_id]['duration'])
if pred is not None and label is not None and proba is not None:
print(f"filepath: {filepath}, Sample : {sample_id} Prediction : {pred} Label : {label} Confidence = {proba: 0.4f}")
else:
print(f"Sample : {sample_id}")
return ipd.Audio(audio, rate=sample_rate)
import json
# Now let's load the test manifest into memory
all_test_samples = []
for _ in test_dataset.split(','):
print(_)
with open(_, 'r') as test_f:
test_samples = test_f.readlines()
all_test_samples.extend(test_samples)
print(len(all_test_samples))
test_samples = parse_manifest(all_test_samples)
# Finally, let's listen to all the audio samples where the model made a mistake
# Note: This list of incorrect samples may be quite large, so you may choose to subsample `incorrect_preds`
count = min(count_low_confidence, 20) # replace this line with just `count_low_confidence` to listen to all samples with low confidence
for sample_id, pred, label, proba in incorrect_preds[:count]:
ipd.display(listen_to_file(sample_id, pred=pred, label=label, proba=proba))
```
## Adding evaluation metrics
Here is an example of how to use more metrics (e.g. from pytorch_lightning) to evaluate your result.
**Note:** If you would like to add metrics for training and testing, have a look at
```python
NeMo/nemo/collections/common/metrics
```
```
from pytorch_lightning.metrics.classification import ConfusionMatrix
_, pred = logits.topk(1, dim=1, largest=True, sorted=True)
pred = pred.squeeze()
metric = ConfusionMatrix(num_classes=2)
metric(pred, labels)
# confusion_matrix(preds=pred, target=labels)
```
# Transfer Learning & Fine-tuning on a new dataset
For transfer learning, please refer to the [**Transfer learning** part of the ASR tutorial](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/01_ASR_with_NeMo.ipynb).
For more details on saving and restoring checkpoints, and on exporting a model in its entirety, please refer to the [**Fine-tuning on a new dataset** and **Advanced Usage** parts of the Speech Command tutorial](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/03_Speech_Commands.ipynb).
# Inference and more
If you are interested in a **pretrained** model and **streaming inference**, please have a look at our [VAD inference tutorial](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/tutorials/asr/07_Online_Offline_Microphone_VAD_Demo.ipynb) and the script [vad_infer.py](https://github.com/NVIDIA/NeMo/blob/r1.0.0rc1/examples/asr/vad_infer.py).
**Chapter 1 – The Machine Learning landscape**
_This is the code used to generate some of the figures in chapter 1._
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import numpy.random as rnd
import os
# to make this notebook's output stable across runs
rnd.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "fundamentals"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
# Load and prepare Life satisfaction data
```
import pandas as pd
# Download CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI
datapath = "datasets/lifesat/"
oecd_bli = pd.read_csv(datapath+"oecd_bli_2015.csv", thousands=',')
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
oecd_bli.head(2)
oecd_bli["Life satisfaction"].head()
```
# Load and prepare GDP per capita data
```
# Download data from http://goo.gl/j1MSKe (=> imf.org)
gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t',
encoding='latin1', na_values="n/a")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
gdp_per_capita.head(2)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
full_country_stats
full_country_stats[["GDP per capita", 'Life satisfaction']].loc["United States"]
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
sample_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
missing_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[remove_indices]
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
position_text = {
"Hungary": (5000, 1),
"Korea": (18000, 1.7),
"France": (29000, 2.4),
"Australia": (40000, 3.0),
"United States": (52000, 3.8),
}
for country, pos_text in position_text.items():
pos_data_x, pos_data_y = sample_data.loc[country]
country = "U.S." if country == "United States" else country
plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
plt.plot(pos_data_x, pos_data_y, "ro")
save_fig('money_happy_scatterplot')
plt.show()
sample_data.to_csv("life_satisfaction_vs_gdp_per_capita.csv")
sample_data.loc[list(position_text.keys())]
import numpy as np
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, 2*X/100000, "r")
plt.text(40000, 2.7, r"$\theta_0 = 0$", fontsize=14, color="r")
plt.text(40000, 1.8, r"$\theta_1 = 2 \times 10^{-5}$", fontsize=14, color="r")
plt.plot(X, 8 - 5*X/100000, "g")
plt.text(5000, 9.1, r"$\theta_0 = 8$", fontsize=14, color="g")
plt.text(5000, 8.2, r"$\theta_1 = -5 \times 10^{-5}$", fontsize=14, color="g")
plt.plot(X, 4 + 5*X/100000, "b")
plt.text(5000, 3.5, r"$\theta_0 = 4$", fontsize=14, color="b")
plt.text(5000, 2.6, r"$\theta_1 = 5 \times 10^{-5}$", fontsize=14, color="b")
save_fig('tweaking_model_params_plot')
plt.show()
from sklearn import linear_model
lin1 = linear_model.LinearRegression()
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
lin1.fit(Xsample, ysample)
t0, t1 = lin1.intercept_[0], lin1.coef_[0][0]
t0, t1
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.text(5000, 3.1, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 2.2, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
save_fig('best_fit_model_plot')
plt.show()
cyprus_gdp_per_capita = gdp_per_capita.loc["Cyprus"]["GDP per capita"]
print(cyprus_gdp_per_capita)
cyprus_predicted_life_satisfaction = lin1.predict([[cyprus_gdp_per_capita]])[0][0]
cyprus_predicted_life_satisfaction
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3), s=1)
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.axis([0, 60000, 0, 10])
plt.text(5000, 7.5, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 6.6, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
plt.plot([cyprus_gdp_per_capita, cyprus_gdp_per_capita], [0, cyprus_predicted_life_satisfaction], "r--")
plt.text(25000, 5.0, r"Prediction = 5.96", fontsize=14, color="b")
plt.plot(cyprus_gdp_per_capita, cyprus_predicted_life_satisfaction, "ro")
save_fig('cyprus_prediction_plot')
plt.show()
sample_data[7:10]
(5.1+5.7+6.5)/3
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
return sample_data
# Code example
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.linear_model
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv",thousands=',',delimiter='\t',
encoding='latin1', na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.96242338]]
oecd_bli, gdp_per_capita = backup
missing_data
position_text2 = {
"Brazil": (1000, 9.0),
"Mexico": (11000, 9.0),
"Chile": (25000, 9.0),
"Czech Republic": (35000, 9.0),
"Norway": (60000, 3),
"Switzerland": (72000, 3.0),
"Luxembourg": (90000, 3.0),
}
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
for country, pos_text in position_text2.items():
pos_data_x, pos_data_y = missing_data.loc[country]
plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
plt.plot(pos_data_x, pos_data_y, "rs")
X=np.linspace(0, 110000, 1000)
plt.plot(X, t0 + t1*X, "b:")
lin_reg_full = linear_model.LinearRegression()
Xfull = np.c_[full_country_stats["GDP per capita"]]
yfull = np.c_[full_country_stats["Life satisfaction"]]
lin_reg_full.fit(Xfull, yfull)
t0full, t1full = lin_reg_full.intercept_[0], lin_reg_full.coef_[0][0]
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "k")
save_fig('representative_training_data_scatterplot')
plt.show()
full_country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
from sklearn import preprocessing
from sklearn import pipeline
poly = preprocessing.PolynomialFeatures(degree=60, include_bias=False)
scaler = preprocessing.StandardScaler()
lin_reg2 = linear_model.LinearRegression()
pipeline_reg = pipeline.Pipeline([('poly', poly), ('scal', scaler), ('lin', lin_reg2)])
pipeline_reg.fit(Xfull, yfull)
curve = pipeline_reg.predict(X[:, np.newaxis])
plt.plot(X, curve)
save_fig('overfitting_model_plot')
plt.show()
full_country_stats.loc[[c for c in full_country_stats.index if "W" in c.upper()]]["Life satisfaction"]
gdp_per_capita.loc[[c for c in gdp_per_capita.index if "W" in c.upper()]].head()
plt.figure(figsize=(8,3))
plt.xlabel("GDP per capita")
plt.ylabel('Life satisfaction')
plt.plot(list(sample_data["GDP per capita"]), list(sample_data["Life satisfaction"]), "bo")
plt.plot(list(missing_data["GDP per capita"]), list(missing_data["Life satisfaction"]), "rs")
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "r--", label="Linear model on all data")
plt.plot(X, t0 + t1*X, "b:", label="Linear model on partial data")
ridge = linear_model.Ridge(alpha=10**9.5)
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
ridge.fit(Xsample, ysample)
t0ridge, t1ridge = ridge.intercept_[0], ridge.coef_[0][0]
plt.plot(X, t0ridge + t1ridge * X, "b", label="Regularized linear model on partial data")
plt.legend(loc="lower right")
plt.axis([0, 110000, 0, 10])
save_fig('ridge_model_plot')
plt.show()
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
return sample_data
# Replace this linear model:
model = sklearn.linear_model.LinearRegression()
# with this k-neighbors regression model:
import sklearn.neighbors
model = sklearn.neighbors.KNeighborsRegressor(n_neighbors=3)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = np.array([[22587.0]]) # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.76666667]]
```
# Working with preprocessing layers
**Authors:** Francois Chollet, Mark Omernick<br>
**Date created:** 2020/07/25<br>
**Last modified:** 2021/04/23<br>
**Description:** Overview of how to leverage preprocessing layers to create end-to-end models.
## Keras preprocessing
The Keras preprocessing layers API allows developers to build Keras-native input
processing pipelines. These input processing pipelines can be used as independent
preprocessing code in non-Keras workflows, combined directly with Keras models, and
exported as part of a Keras SavedModel.
With Keras preprocessing layers, you can build and export models that are truly
end-to-end: models that accept raw images or raw structured data as input; models that
handle feature normalization or feature value indexing on their own.
## Available preprocessing
### Text preprocessing
- `tf.keras.layers.TextVectorization`: turns raw strings into an encoded
representation that can be read by an `Embedding` layer or `Dense` layer.
### Numerical features preprocessing
- `tf.keras.layers.Normalization`: performs feature-wise normalization of
input features.
- `tf.keras.layers.Discretization`: turns continuous numerical features
into integer categorical features.
### Categorical features preprocessing
- `tf.keras.layers.CategoryEncoding`: turns integer categorical features
into one-hot, multi-hot, or count dense representations.
- `tf.keras.layers.Hashing`: performs categorical feature hashing, also known as
the "hashing trick".
- `tf.keras.layers.StringLookup`: turns string categorical values into an encoded
representation that can be read by an `Embedding` layer or `Dense` layer.
- `tf.keras.layers.IntegerLookup`: turns integer categorical values into an
encoded representation that can be read by an `Embedding` layer or `Dense`
layer.
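For instance, here is a minimal sketch of `CategoryEncoding` on its own (the toy values are assumptions for illustration; a combined `Hashing` + `CategoryEncoding` recipe appears in the quick recipes below):
```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy integer categories already indexed into the range [0, 4)
cats = tf.constant([[0], [1], [2], [1]])
onehot = layers.CategoryEncoding(num_tokens=4, output_mode="one_hot")
print(onehot(cats))  # one 4-dimensional one-hot vector per sample
```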
### Image preprocessing
These layers are for standardizing the inputs of an image model.
- `tf.keras.layers.Resizing`: resizes a batch of images to a target size.
- `tf.keras.layers.Rescaling`: rescales and offsets the values of a batch of
images (e.g. going from inputs in the `[0, 255]` range to inputs in the `[0, 1]`
range).
- `tf.keras.layers.CenterCrop`: returns a center crop of a batch of images.
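As a hedged sketch (the batch shape and sizes below are made up for illustration), these image layers can be chained directly on a batch of images:
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# A fake batch of 4 RGB images, 200x300 pixels, with values in [0, 255]
images = np.random.randint(0, 256, size=(4, 200, 300, 3)).astype("float32")

resize = layers.Resizing(128, 128)           # resize every image to 128x128
rescale = layers.Rescaling(scale=1.0 / 255)  # map [0, 255] -> [0, 1]
crop = layers.CenterCrop(96, 96)             # take a 96x96 center crop

x = crop(rescale(resize(images)))
print(x.shape)  # (4, 96, 96, 3)
```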
### Image data augmentation
These layers apply random augmentation transforms to a batch of images. They
are only active during training.
- `tf.keras.layers.RandomCrop`
- `tf.keras.layers.RandomFlip`
- `tf.keras.layers.RandomTranslation`
- `tf.keras.layers.RandomRotation`
- `tf.keras.layers.RandomZoom`
- `tf.keras.layers.RandomHeight`
- `tf.keras.layers.RandomWidth`
- `tf.keras.layers.RandomContrast`
## The `adapt()` method
Some preprocessing layers have an internal state that can be computed based on
a sample of the training data. The list of stateful preprocessing layers is:
- `TextVectorization`: holds a mapping between string tokens and integer indices
- `StringLookup` and `IntegerLookup`: hold a mapping between input values and integer
indices.
- `Normalization`: holds the mean and standard deviation of the features.
- `Discretization`: holds information about value bucket boundaries.
Crucially, these layers are **non-trainable**. Their state is not set during training; it
must be set **before training**, either by initializing them from a precomputed constant,
or by "adapting" them on data.
You set the state of a preprocessing layer by exposing it to training data, via the
`adapt()` method:
```
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
data = np.array([[0.1, 0.2, 0.3], [0.8, 0.9, 1.0], [1.5, 1.6, 1.7],])
layer = layers.Normalization()
layer.adapt(data)
normalized_data = layer(data)
print("Features mean: %.2f" % (normalized_data.numpy().mean()))
print("Features std: %.2f" % (normalized_data.numpy().std()))
```
The `adapt()` method takes either a Numpy array or a `tf.data.Dataset` object. In the
case of `StringLookup` and `TextVectorization`, you can also pass a list of strings:
```
data = [
"ξεῖν᾽, ἦ τοι μὲν ὄνειροι ἀμήχανοι ἀκριτόμυθοι",
"γίγνοντ᾽, οὐδέ τι πάντα τελείεται ἀνθρώποισι.",
"δοιαὶ γάρ τε πύλαι ἀμενηνῶν εἰσὶν ὀνείρων:",
"αἱ μὲν γὰρ κεράεσσι τετεύχαται, αἱ δ᾽ ἐλέφαντι:",
"τῶν οἳ μέν κ᾽ ἔλθωσι διὰ πριστοῦ ἐλέφαντος,",
"οἵ ῥ᾽ ἐλεφαίρονται, ἔπε᾽ ἀκράαντα φέροντες:",
"οἱ δὲ διὰ ξεστῶν κεράων ἔλθωσι θύραζε,",
"οἵ ῥ᾽ ἔτυμα κραίνουσι, βροτῶν ὅτε κέν τις ἴδηται.",
]
layer = layers.TextVectorization()
layer.adapt(data)
vectorized_text = layer(data)
print(vectorized_text)
```
In addition, adaptable layers always expose an option to directly set state via
constructor arguments or weight assignment. If the intended state values are known at
layer construction time, or are calculated outside of the `adapt()` call, they can be set
without relying on the layer's internal computation. For instance, if external vocabulary
files for the `TextVectorization`, `StringLookup`, or `IntegerLookup` layers already
exist, those can be loaded directly into the lookup tables by passing a path to the
vocabulary file in the layer's constructor arguments.
Here's an example where we instantiate a `StringLookup` layer with precomputed vocabulary:
```
vocab = ["a", "b", "c", "d"]
data = tf.constant([["a", "c", "d"], ["d", "z", "b"]])
layer = layers.StringLookup(vocabulary=vocab)
vectorized_data = layer(data)
print(vectorized_data)
```
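If the vocabulary lives in a text file instead (one token per line), the same layer can be pointed at that file directly; this is a sketch with a made-up file name:
```python
# Write a throwaway vocabulary file (illustrative path), then load it at construction time
with open("vocab.txt", "w") as f:
    f.write("a\nb\nc\nd\n")

layer_from_file = layers.StringLookup(vocabulary="vocab.txt")
print(layer_from_file(tf.constant([["a", "c", "z"]])))
```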
## Preprocessing data before the model or inside the model
There are two ways you could be using preprocessing layers:
**Option 1:** Make them part of the model, like this:
```python
inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = rest_of_the_model(x)
model = keras.Model(inputs, outputs)
```
With this option, preprocessing will happen on device, synchronously with the rest of the
model execution, meaning that it will benefit from GPU acceleration.
If you're training on GPU, this is the best option for the `Normalization` layer, and for
all image preprocessing and data augmentation layers.
**Option 2:** apply it to your `tf.data.Dataset`, so as to obtain a dataset that yields
batches of preprocessed data, like this:
```python
dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))
```
With this option, your preprocessing will happen on CPU, asynchronously, and will be
buffered before going into the model.
In addition, if you call `dataset.prefetch(tf.data.AUTOTUNE)` on your dataset,
the preprocessing will happen efficiently in parallel with training:
```python
dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))
dataset = dataset.prefetch(tf.data.AUTOTUNE)
model.fit(dataset, ...)
```
This is the best option for `TextVectorization`, and all structured data preprocessing
layers. It can also be a good option if you're training on CPU
and you use image preprocessing layers.
**When running on TPU, you should always place preprocessing layers in the `tf.data` pipeline**
(with the exception of `Normalization` and `Rescaling`, which run fine on TPU and are commonly
used as the first layer in an image model).
## Benefits of doing preprocessing inside the model at inference time
Even if you go with option 2, you may later want to export an inference-only end-to-end
model that will include the preprocessing layers. The key benefit to doing this is that
**it makes your model portable** and it **helps reduce the
[training/serving skew](https://developers.google.com/machine-learning/guides/rules-of-ml#training-serving_skew)**.
When all data preprocessing is part of the model, other people can load and use your
model without having to be aware of how each feature is expected to be encoded &
normalized. Your inference model will be able to process raw images or raw structured
data, and will not require users of the model to be aware of the details of e.g. the
tokenization scheme used for text, the indexing scheme used for categorical features,
whether image pixel values are normalized to `[-1, +1]` or to `[0, 1]`, etc. This is
especially powerful if you're exporting
your model to another runtime, such as TensorFlow.js: you won't have to
reimplement your preprocessing pipeline in JavaScript.
If you initially put your preprocessing layers in your `tf.data` pipeline,
you can export an inference model that packages the preprocessing.
Simply instantiate a new model that chains
your preprocessing layers and your training model:
```python
inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = training_model(x)
inference_model = keras.Model(inputs, outputs)
```
## Preprocessing during multi-worker training
Preprocessing layers are compatible with the
[tf.distribute](https://www.tensorflow.org/api_docs/python/tf/distribute) API
for running training across multiple machines.
In general, preprocessing layers should be placed inside a `strategy.scope()`
and called either inside or before the model as discussed above.
```python
with strategy.scope():
inputs = keras.Input(shape=input_shape)
preprocessing_layer = tf.keras.layers.Hashing(10)
dense_layer = tf.keras.layers.Dense(16)
```
For more details, refer to the
[preprocessing section](https://www.tensorflow.org/tutorials/distribute/input#data_preprocessing)
of the distributed input guide.
## Quick recipes
### Image data augmentation
Note that image data augmentation layers are only active during training (similarly to
the `Dropout` layer).
```
from tensorflow import keras
from tensorflow.keras import layers
# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.1),
]
)
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
input_shape = x_train.shape[1:]
classes = 10
# Create a tf.data pipeline of augmented images (and their labels)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.batch(16).map(lambda x, y: (data_augmentation(x), y))
# Create a model and train it on the augmented image data
inputs = keras.Input(shape=input_shape)
x = layers.Rescaling(1.0 / 255)(inputs) # Rescale inputs
outputs = keras.applications.ResNet50( # Add the rest of the model
weights=None, input_shape=input_shape, classes=classes
)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
model.fit(train_dataset, steps_per_epoch=5)
```
You can see a similar setup in action in the example
[image classification from scratch](https://keras.io/examples/vision/image_classification_from_scratch/).
### Normalizing numerical features
```
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
x_train = x_train.reshape((len(x_train), -1))
input_shape = x_train.shape[1:]
classes = 10
# Create a Normalization layer and set its internal state using the training data
normalizer = layers.Normalization()
normalizer.adapt(x_train)
# Create a model that includes the normalization layer
inputs = keras.Input(shape=input_shape)
x = normalizer(inputs)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)
# Train the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train)
```
### Encoding string categorical features via one-hot encoding
```
# Define some toy data
data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]])
# Use StringLookup to build an index of the feature values and encode output.
lookup = layers.StringLookup(output_mode="one_hot")
lookup.adapt(data)
# Convert new test data (which includes unknown feature values)
test_data = tf.constant([["a"], ["b"], ["c"], ["d"], ["e"], [""]])
encoded_data = lookup(test_data)
print(encoded_data)
```
Note that, here, index 0 is reserved for out-of-vocabulary values
(values that were not seen during `adapt()`).
You can see the `StringLookup` in action in the
[Structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/)
example.
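To check this yourself (a tiny sketch reusing the `lookup` layer adapted above), you can inspect the learned vocabulary; the OOV token (`"[UNK]"` by default) should sit at index 0:
```python
# Index 0 holds the OOV token after adapt() (no mask token was configured here)
print(lookup.get_vocabulary())
```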
### Encoding integer categorical features via one-hot encoding
```
# Define some toy data
data = tf.constant([[10], [20], [20], [10], [30], [0]])
# Use IntegerLookup to build an index of the feature values and encode output.
lookup = layers.IntegerLookup(output_mode="one_hot")
lookup.adapt(data)
# Convert new test data (which includes unknown feature values)
test_data = tf.constant([[10], [10], [20], [50], [60], [0]])
encoded_data = lookup(test_data)
print(encoded_data)
```
Note that index 0 is reserved for missing values (which you should specify as the value
0), and index 1 is reserved for out-of-vocabulary values (values that were not seen
during `adapt()`). You can configure this by using the `mask_token` and `oov_token`
constructor arguments of `IntegerLookup`.
You can see the `IntegerLookup` in action in the example
[structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/).
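As a small illustrative sketch (the argument values below are assumptions chosen for demonstration, not taken from the guide), the reserved tokens can also be set explicitly in the constructor:
```python
# Make the reserved indices explicit: 0 marks missing values, -1 marks OOV values
lookup = layers.IntegerLookup(output_mode="one_hot", mask_token=0, oov_token=-1)
lookup.adapt(data)
print(lookup.get_vocabulary())  # mask token first, then the OOV token, then the vocabulary
```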
### Applying the hashing trick to an integer categorical feature
If you have a categorical feature that can take many different values (on the order of
10e3 or higher), where each value only appears a few times in the data,
it becomes impractical and ineffective to index and one-hot encode the feature values.
Instead, it can be a good idea to apply the "hashing trick": hash the values to a vector
of fixed size. This keeps the size of the feature space manageable, and removes the need
for explicit indexing.
```
# Sample data: 10,000 random integers with values between 0 and 100,000
data = np.random.randint(0, 100000, size=(10000, 1))
# Use the Hashing layer to hash the values to the range [0, 64]
hasher = layers.Hashing(num_bins=64, salt=1337)
# Use the CategoryEncoding layer to multi-hot encode the hashed values
encoder = layers.CategoryEncoding(num_tokens=64, output_mode="multi_hot")
encoded_data = encoder(hasher(data))
print(encoded_data.shape)
```
### Encoding text as a sequence of token indices
This is how you should preprocess text to be passed to an `Embedding` layer.
```
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Create a TextVectorization layer
text_vectorizer = layers.TextVectorization(output_mode="int")
# Index the vocabulary via `adapt()`
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=text_vectorizer.vocabulary_size(), output_dim=16)(inputs)
x = layers.GRU(8)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
You can see the `TextVectorization` layer in action, combined with an `Embedding` layer,
in the example
[text classification from scratch](https://keras.io/examples/nlp/text_classification_from_scratch/).
Note that when training such a model, for best performance, you should always
use the `TextVectorization` layer as part of the input pipeline.
### Encoding text as a dense matrix of ngrams with multi-hot encoding
This is how you should preprocess text to be passed to a `Dense` layer.
```
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Instantiate TextVectorization with "multi_hot" output_mode
# and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="multi_hot", ngrams=2)
# Index the bigrams via `adapt()`
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
### Encoding text as a dense matrix of ngrams with TF-IDF weighting
This is an alternative way of preprocessing text before passing it to a `Dense` layer.
```
# Define some text data to adapt the layer
adapt_data = tf.constant(
[
"The Brain is wider than the Sky",
"For put them side by side",
"The one the other will contain",
"With ease and You beside",
]
)
# Instantiate TextVectorization with "tf-idf" output_mode
# (multi-hot with TF-IDF weighting) and ngrams=2 (index all bigrams)
text_vectorizer = layers.TextVectorization(output_mode="tf-idf", ngrams=2)
# Index the bigrams and learn the TF-IDF weights via `adapt()`
with tf.device("CPU"):
# A bug that prevents this from running on GPU for now.
text_vectorizer.adapt(adapt_data)
# Try out the layer
print(
"Encoded text:\n", text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
# Create a simple model
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
# Create a labeled dataset (which includes unknown tokens)
train_dataset = tf.data.Dataset.from_tensor_slices(
(["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
# Preprocess the string inputs, turning them into int sequences
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
# Train the model on the int sequences
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
# For inference, you can export a model that accepts strings as input
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
# Call the end-to-end model on test data (which includes unknown tokens)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
## Important gotchas
### Working with lookup layers with very large vocabularies
You may find yourself working with a very large vocabulary in a `TextVectorization`, a `StringLookup` layer,
or an `IntegerLookup` layer. Typically, a vocabulary larger than 500MB would be considered "very large".
In such a case, for best performance, you should avoid using `adapt()`.
Instead, pre-compute your vocabulary in advance
(you could use Apache Beam or TF Transform for this)
and store it in a file. Then load the vocabulary into the layer at construction
time by passing the filepath as the `vocabulary` argument.
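A minimal sketch of that pattern (the file name and its contents below are placeholders; in practice the vocabulary would be precomputed offline):
```python
# Placeholder vocabulary file with one token per line (normally precomputed offline)
vocab_path = "precomputed_vocab.txt"
with open(vocab_path, "w") as f:
    f.write("the\nbrain\nis\nwider\nthan\nsky\n")

# Skip adapt() entirely: the vocabulary is loaded at construction time
text_vectorizer = layers.TextVectorization(vocabulary=vocab_path)
string_lookup = layers.StringLookup(vocabulary=vocab_path)
print(text_vectorizer(["the sky is wider"]))
```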
### Using lookup layers on a TPU pod or with `ParameterServerStrategy`.
There is an outstanding issue that causes performance to degrade when using
a `TextVectorization`, `StringLookup`, or `IntegerLookup` layer while
training on a TPU pod or on multiple machines via `ParameterServerStrategy`.
This is slated to be fixed in TensorFlow 2.7.
# Testing Cnots
In this notebook we take imperfect versions of cnot gates and see how well they would work within a `d=3`, `T=1` surface code and a `d=3`, `T=3` repetition code.
```
import numpy as np
from copy import deepcopy
from topological_codes import RepetitionCode, SurfaceCode, GraphDecoder
from qiskit import QuantumCircuit, transpile
from qiskit.providers.aer import AerSimulator
from qiskit.providers.aer.noise.errors import depolarizing_error
from qiskit.circuit.library import CRXGate
from qiskit.quantum_info import process_fidelity
from matplotlib import pyplot as plt
```
The candidate cnots to be tested need to be provided as a Qiskit instruction. These can be created from forms such as unitaries, Choi matrices, Qiskit gates and Qiskit circuits.
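For instance, a candidate built from a raw unitary matrix might look like the following sketch (this block is illustrative and not part of the original notebook; it assumes Qiskit Terra's `UnitaryGate`):
```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.extensions import UnitaryGate

# An ideal cnot written as a 4x4 unitary; any imperfect 4x4 unitary could be dropped in instead
cx_matrix = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 0, 1],
                      [0, 0, 1, 0]])

qc_u = QuantumCircuit(2, name='unitary cx')
qc_u.append(UnitaryGate(cx_matrix), [0, 1])
cand_from_unitary = qc_u.to_instruction()
```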
For example, the following function creates a noisy cnot from a noisy circuit, parameterized by an error probability $\epsilon$. It can generate either coherent or incoherent forms of noise.
```
def noisy_cx(eps, coherent=True):
if coherent:
error = CRXGate(np.pi*eps/2)
else:
error = depolarizing_error(eps/2,2)
qc = QuantumCircuit(2,name='noisy cx')
qc.append(error,[1,0])
qc.cx(0,1)
qc.append(error,[1,0])
return qc.to_instruction()
code = SurfaceCode(3,2)
qc = code.circuit['0']
```
Given a code and a candidate cnot, the following function replaces all instances of cnots with the candidate cnot.
```
def make_noisy_code(code, cand_cx):
noisy_code = deepcopy(code)
for log in code.circuit:
qc = noisy_code.circuit[log]
temp_qc = QuantumCircuit()
for qreg in qc.qregs:
temp_qc.add_register(qreg)
for creg in qc.cregs:
temp_qc.add_register(creg)
for gate in qc.data:
if gate[0].name=='cx':
temp_qc.append(cand_cx,gate[1])
else:
temp_qc.data.append(gate)
noisy_code.circuit[log] = temp_qc.copy()
return noisy_code
```
In some cases, it is better to extract the exact probabilities from a simulation rather than using sampling. However, to do this we need to defer all measurements to the end. For this we add auxiliary qubits corresponding to each classical bit. We also need to rewrite the output bit string to reproduce the format that the results would otherwise have. The following functions do these things.
```
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
def move_msm(qc):
bits = []
for creg in qc.cregs:
for bit in creg:
bits.append(bit)
new_qc = QuantumCircuit()
for regs in [qc.qregs, qc.cregs]:
for reg in regs:
new_qc.add_register(reg)
aux = {}
for reg in qc.cregs:
for bit in reg:
aux[bits.index(bit)] = QuantumRegister(1)
new_qc.add_register(aux[bits.index(bit)])
for gate in qc.data:
if gate[0].name=='measure':
new_qc.cx(gate[1][0], aux[bits.index(gate[2][0])])
else:
new_qc.data.append(gate)
new_qc.save_probabilities_dict()
return new_qc, aux
def format_probs(probs, new_qc, aux):
bits = []
for creg in qc.cregs:
for bit in creg:
bits.append(bit)
index = {}
for reg in new_qc.cregs:
for bit in reg:
index[bit] = new_qc.qubits.index(aux[bits.index(bit)][0])
new_probs = {}
for string,prob in probs.items():
new_string = ''
for reg in new_qc.cregs:
for bit in reg:
j = index[bit]
new_string += string[-1-j]
new_string += ' '
new_string = new_string[::-1][1::]
if new_string in new_probs:
new_probs[new_string] += prob
else:
new_probs[new_string] = prob
return new_probs
```
Now we can run simulations of the codes for different candidate cnots, and see what logical error rates we find.
```
# choose the type of code to study
repetition = False
# and the type of noise
coherent = True
# set the noise levels to study
noise = [0.1+0.02*j for j in range(10)]
# and calculate the corresponding process infidelities
infidelity = [ 1-process_fidelity(noisy_cx(eps),noisy_cx(0)) for eps in noise ]
backend = AerSimulator(zero_threshold=1e-5)
if repetition:
d,T = 3,3
else:
d,T = 3,1
sample = (not coherent) or (not repetition)
if sample:
shots = 4*8192
else:
shots = 1
logical = {'z':[], 'x':[]}
for basis in ['z', 'x']:
if repetition:
decoder = GraphDecoder(RepetitionCode(d,T,xbasis=(basis=='x')))
else:
decoder = GraphDecoder(SurfaceCode(d,T,basis=basis))
for eps in noise:
# make the noisy code
cand_cx = noisy_cx(eps,coherent=coherent)
if repetition:
code = make_noisy_code(RepetitionCode(d,T,xbasis=(basis=='x')),cand_cx)
else:
code = make_noisy_code(SurfaceCode(d,T,basis=basis),cand_cx)
# run it
raw_results = {}
if sample:
circuits = code.get_circuit_list()
else:
auxs = []
circuits = []
for qc in code.get_circuit_list():
new_qc,aux = move_msm(qc)
circuits.append(new_qc)
auxs.append(aux)
circuits = transpile(circuits,backend)
job = backend.run(circuits, shots=shots)
if sample:
for log in ['0','1']:
raw_results[log] = job.result().get_counts(int(log))
else:
for qc,aux in zip(circuits,auxs):
probs = job.result().data(qc)['probabilities']
n = str(len(qc.qubits))
probs = {('{0:0'+n+'b}').format(output):shots for output,shots in probs.items()}
raw_results[str(circuits.index(qc))] = {string:prob for string,prob in format_probs(probs, qc, aux).items()}
results = code.process_results(raw_results)
# get logical error probs
logical[basis].append( max(decoder.get_logical_prob(results).values()) )
print('Complete:',basis,eps)
plt.scatter(infidelity,[max(logical['z'][j],logical['x'][j]) for j in range(len(noise))],label='max')
```
# Grid search forecaster
The skforecast library combines a grid search strategy with backtesting to identify the combination of lags and hyperparameters that achieves the best prediction performance.
The grid search requires two grids, one with the different lag configurations (`lags_grid`) and the other with the list of hyperparameters to be tested (`param_grid`). The process comprises the following steps:
1. `grid_search_forecaster` creates a copy of the forecaster object and replaces the `lags` argument with the first option appearing in `lags_grid`.
2. The function validates all combinations of hyperparameters presented in `param_grid` by [backtesting](https://joaquinamatrodrigo.github.io/skforecast/latest/user_guides/backtesting.html).
3. The function repeats these two steps until it runs through all the possibilities (lags + hyperparameters).
4. If `return_best = True`, the original forecaster is trained with the best lags and hyperparameters configuration found.
## Libraries
```
# Libraries
# ==============================================================================
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg
from skforecast.model_selection import grid_search_forecaster
from sklearn.metrics import mean_squared_error
```
## Data
```
# Download data
# ==============================================================================
url = ('https://raw.githubusercontent.com/JoaquinAmatRodrigo/skforecast/master/data/h2o.csv')
data = pd.read_csv(url, sep=',', header=0, names=['y', 'datetime'])
# Data preprocessing
# ==============================================================================
data['datetime'] = pd.to_datetime(data['datetime'], format='%Y/%m/%d')
data = data.set_index('datetime')
data = data.asfreq('MS')
data = data[['y']]
data = data.sort_index()
# Train-val-test dates
# ==============================================================================
end_train = '2001-01-01 23:59:00'
end_val = '2006-01-01 23:59:00'
print(f"Train dates : {data.index.min()} --- {data.loc[:end_train].index.max()} (n={len(data.loc[:end_train])})")
print(f"Validation dates : {data.loc[end_train:].index.min()} --- {data.loc[:end_val].index.max()} (n={len(data.loc[end_train:end_val])})")
print(f"Test dates : {data.loc[end_val:].index.min()} --- {data.index.max()} (n={len(data.loc[end_val:])})")
# Plot
# ==============================================================================
fig, ax=plt.subplots(figsize=(9, 4))
data.loc[:end_train].plot(ax=ax, label='train')
data.loc[end_train:end_val].plot(ax=ax, label='validation')
data.loc[end_val:].plot(ax=ax, label='test')
ax.legend();
```
## Grid search
```
# Grid search hyperparameter and lags
# ==============================================================================
forecaster = ForecasterAutoreg(
regressor = RandomForestRegressor(random_state=123),
lags = 10 # Placeholder, the value will be overwritten
)
# Lags used as predictors
lags_grid = [3, 10, [1, 2, 3, 20]]
# Regressor hyperparameters
param_grid = {'n_estimators': [50, 100],
'max_depth': [5, 10, 15]}
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = data.loc[:end_val, 'y'],
param_grid = param_grid,
lags_grid = lags_grid,
steps = 12,
refit = True,
metric = 'mean_squared_error',
initial_train_size = len(data.loc[:end_train]),
fixed_train_size = False,
return_best = True,
verbose = False
)
results_grid
forecaster
```
## Grid search with custom metric
Besides the frequently used metrics: mean_squared_error, mean_absolute_error, and mean_absolute_percentage_error, it is possible to use any custom function as long as:
+ It includes the arguments:
+ `y_true`: true values of the series.
+ `y_pred`: predicted values.
+ It returns a numeric value (`float` or `int`).
It allows evaluating the predictive capability of the model in a wide range of scenarios, for example:
+ Consider only certain months, days, hours...
+ Consider only dates that are holidays.
+ Consider only the last step of the predicted horizon.
The following example shows how to forecast a 12-month horizon but considering only the last 3 months of each year to calculate the interest metric.
```
# Grid search hyperparameter and lags with custom metric
# ==============================================================================
def custom_metric(y_true, y_pred):
'''
Calculate the mean squared error using only the predicted values of the last
3 months of the year.
'''
mask = y_true.index.month.isin([10, 11, 12])
metric = mean_squared_error(y_true[mask], y_pred[mask])
return metric
forecaster = ForecasterAutoreg(
regressor = RandomForestRegressor(random_state=123),
lags = 10 # Placeholder, the value will be overwritten
)
# Lags used as predictors
lags_grid = [3, 10, [1, 2, 3, 20]]
# Regressor hyperparameters
param_grid = {'n_estimators': [50, 100],
'max_depth': [5, 10, 15]}
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = data.loc[:end_val, 'y'],
param_grid = param_grid,
lags_grid = lags_grid,
steps = 12,
refit = True,
metric = custom_metric,
initial_train_size = len(data.loc[:end_train]),
fixed_train_size = False,
return_best = True,
verbose = False
)
```
## Hide progress bar
It is possible to hide the progress bar using the following code.
```
from tqdm import tqdm
from functools import partialmethod
tqdm.__init__ = partialmethod(tqdm.__init__, disable=True)
results_grid = grid_search_forecaster(
forecaster = forecaster,
y = data.loc[:end_val, 'y'],
param_grid = param_grid,
lags_grid = lags_grid,
steps = 12,
refit = True,
metric = 'mean_squared_error',
initial_train_size = len(data.loc[:end_train]),
fixed_train_size = False,
return_best = True,
verbose = False
)
%%html
<style>
.jupyter-wrapper .jp-CodeCell .jp-Cell-inputWrapper .jp-InputPrompt {display: none;}
</style>
```
# The IMDb Dataset
The IMDb dataset consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. We use the two-way (positive/negative) class split, and use only sentence-level labels.
```
from IPython.display import display, Markdown
with open('../../doc/env_variables_setup.md', 'r') as fh:
content = fh.read()
display(Markdown(content))
```
## Import Packages
```
import tensorflow as tf
import tensorflow_datasets
from tensorflow.keras.utils import to_categorical
from transformers import (
BertConfig,
BertTokenizer,
TFBertModel,
TFBertForSequenceClassification,
glue_convert_examples_to_features,
glue_processors
)
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
import math
import numpy as np
import os
import time
from datetime import timedelta
import shutil
from datetime import datetime
import pickle
# new
import re
from keras.models import Sequential, load_model
```
## Check configuration
```
print(tf.version.GIT_VERSION, tf.version.VERSION)
print(tf.keras.__version__)
gpus = tf.config.list_physical_devices('GPU')
if len(gpus)>0:
for gpu in gpus:
print('Name:', gpu.name, ' Type:', gpu.device_type)
else:
print('No GPU available !!!!')
```
## Define Paths
```
# note: these need to be specified in the config.sh file
try:
data_dir=os.environ['PATH_DATASETS']
except KeyError:
print('missing PATH_DATASETS')
try:
tensorboard_dir=os.environ['PATH_TENSORBOARD']
except KeyError:
print('missing PATH_TENSORBOARD')
try:
savemodel_dir=os.environ['PATH_SAVE_MODEL']
except KeyError:
print('missing PATH_SAVE_MODEL')
```
## Import local packages
```
import preprocessing.preprocessing as pp
import utils.model_metrics as mm
import importlib
importlib.reload(pp);
importlib.reload(mm);
```
## Loading a data from Tensorflow Datasets
```
data, info = tensorflow_datasets.load(name="imdb_reviews",
data_dir=data_dir,
as_supervised=True,
with_info=True)
# IMDb specific:
data_valid = data['test'].take(1000)
# trying to create a true validation data set for after the computation
#data_valid_ext = data['test'].take(2000)
#data_valid = data_valid_ext.take(1000)
```
### Checking basic info from the metadata
```
info
pp.print_info_dataset(info)
```
### Checking basic info from the metadata
```
data
data.keys()
# only works for glue-compatible datasets
try:
pp.print_info_data(data['train'])
except AttributeError:
print('data format incompatible')
```
## Define parameters of the model
```
# changes: had to eliminate all lines concerning a test data set because we only have train and valid
# define parameters
#BATCH_SIZE_TRAIN = 32
#BATCH_SIZE_TEST = 32
#BATCH_SIZE_VALID = 64
#EPOCH = 2
#TOKENIZER = 'bert-base-multilingual-uncased'
#MAX_LENGTH = 512
# extract parameters
size_train_dataset = info.splits['train'].num_examples
# the size for the validation data set has been manually computed according to the function
# pp.print_info_data because the test set has been manually split above
size_valid_dataset = np.shape(np.array(list(data_valid.as_numpy_iterator())))[0]
number_label = info.features["label"].num_classes
# compute parameters
#STEP_EPOCH_TRAIN = math.ceil(size_train_dataset/BATCH_SIZE_TRAIN)
#STEP_EPOCH_VALID = math.ceil(size_valid_dataset/BATCH_SIZE_VALID)
#print('Dataset size: {:6}/{:6}'.format(size_train_dataset, size_valid_dataset))
#print('Batch size: {:6}/{:6}'.format(BATCH_SIZE_TRAIN, BATCH_SIZE_VALID))
#print('Step per epoch: {:6}/{:6}'.format(STEP_EPOCH_TRAIN, STEP_EPOCH_VALID))
#print('Total number of batch: {:6}/{:6}'.format(STEP_EPOCH_TRAIN*(EPOCH+1), STEP_EPOCH_VALID*(EPOCH+1)))
```
### Additional steps for the IMDb dataset specifically
#### Cleaning
```
def preprocess_reviews(reviews):
#REPLACE_NO_SPACE = re.compile("[.;:!\'?,\"()\[\]]")
REPLACE_WITH_SPACE = re.compile("(<br\s*/><br\s*/>)|(\-)|(\/)")
#ae, oe, ue => only for GERMAN data
#REPLACE_UMLAUT_AE = re.compile("(ae)")
#REPLACE_UMLAUT_OE = re.compile("(oe)")
#REPLACE_UMLAUT_UE = re.compile("(ue)")
#reviews = [REPLACE_NO_SPACE.sub("", line[0].decode("utf-8").lower()) for line in np.array(list(reviews.as_numpy_iterator()))]
reviews = [REPLACE_WITH_SPACE.sub(" ", line[0].decode("utf-8")) for line in np.array(list(reviews.as_numpy_iterator()))]# for line in reviews]
#reviews = [REPLACE_UMLAUT_AE.sub("ä", line[0]) for line in reviews]
#reviews = [REPLACE_UMLAUT_OE.sub("ö", line[0]) for line in reviews]
#reviews = [REPLACE_UMLAUT_UE.sub("ü", line[0]) for line in reviews]
return reviews
reviews_train_clean = preprocess_reviews(data['train'])
reviews_valid_clean = preprocess_reviews(data_valid)
# calculate the number of characters
x = []
for i in reviews_valid_clean:
x.append(len(i))
sum(x)
# divide into two batches
batch_1 = reviews_valid_clean[:500]
batch_2 = reviews_valid_clean[500:]
```
## Translating the Validation Dataset
```
# do this for 3 examples first
# step 1: save data in the right format (.txt, .tsv or html)
with open('en_batch_2.txt', 'w') as f:
for item in batch_2:
# for item in reviews_valid_clean[:3]:
f.write("%s\n\n\n" % item)
# step 2: upload to storage bucket 1 (os.environ['BUCKET_NAME'])
# gsutil cp /home/vera_luechinger/proj_multilingual_text_classification/notebook/00-Test/en_batch_2.txt gs://os.environ['BUCKET_NAME']/
# step 3: translate in storage and store in bucket 2 (os.environ['BUCKET_NAME']_translation: must be empty before the translation process begins)
# batch translation
from google.cloud import translate
import time
def batch_translate_text(
input_uri="gs://"+os.environ['BUCKET_NAME']+"/en_batch_2.txt",
output_uri="gs://"+os.environ['BUCKET_NAME_TRANSLATION']+"/",
project_id=os.environ['PROJECT_ID']
):
"""Translates a batch of texts on GCS and stores the result in a GCS location."""
client = translate.TranslationServiceClient()
location = "us-central1"
# Supported file types: https://cloud.google.com/translate/docs/supported-formats
gcs_source = {"input_uri": input_uri}
input_configs_element = {
"gcs_source": gcs_source,
"mime_type": "text/plain" # Can be "text/plain" or "text/html".
}
gcs_destination = {"output_uri_prefix": output_uri}
output_config = {"gcs_destination": gcs_destination}
parent = client.location_path(project_id, location)
# Supported language codes: https://cloud.google.com/translate/docs/language
start_time = time.time()
operation = client.batch_translate_text(
parent=parent,
source_language_code="en",
target_language_codes=["fr","de"], # Up to 10 language codes here.
input_configs=[input_configs_element],
output_config=output_config)
print(u"Waiting for operation to complete...")
response = operation.result(180)
elapsed_time_secs = time.time() - start_time
print(u"Execution Time: {}".format(elapsed_time_secs))
print(u"Total Characters: {}".format(response.total_characters))
print(u"Translated Characters: {}".format(response.translated_characters))
batch_translate_text()
# step 4: save files in the first bucket
#gsutil cp gs://os.environ['BUCKET_NAME']+_translation/os.environ['BUCKET_NAME']_en_batch_2_fr_translations.txt gs://os.environ['BUCKET_NAME']/batch_2/
de_1_dir = "gs://"+os.environ['BUCKET_NAME']+"/batch_1/"+os.environ['BUCKET_NAME']+"_en_batch_1_de_translations.txt"
from google.cloud import storage
#from config import bucketName, localFolder, bucketFolder
storage_client = storage.Client()
bucket = storage_client.get_bucket(os.environ['BUCKET_NAME'])
#bucket
def download_file(bucketName, file, localFolder):
"""Download file from GCP bucket."""
#fileList = list_files(bucketName)
#rand = randint(0, len(fileList) - 1)
storage_client = storage.Client()
bucket = storage_client.get_bucket(bucketName)
blob = bucket.blob(file)
fileName = blob.name.split('/')[-1]
blob.download_to_filename(localFolder + fileName)
return f'{fileName} downloaded from bucket.'
# drop this before pushing
download_file(os.environ['BUCKET_NAME'], "batch_1/"+os.environ['BUCKET_NAME']+"_en_batch_1_fr_translations.txt", "/home/vera_luechinger/data/imdb_reviews/")
download_file(os.environ['BUCKET_NAME'], "batch_1/"+os.environ['BUCKET_NAME']+"_en_batch_1_de_translations.txt", "/home/vera_luechinger/data/imdb_reviews/")
download_file(os.environ['BUCKET_NAME'], "batch_2/"+os.environ['BUCKET_NAME']+"_en_batch_2_fr_translations.txt", "/home/vera_luechinger/data/imdb_reviews/")
download_file(os.environ['BUCKET_NAME'], "batch_2/"+os.environ['BUCKET_NAME']+"_en_batch_2_de_translations.txt", "/home/vera_luechinger/data/imdb_reviews/")
print("")
# step 5: get translated files from storage to use in notebook
with open("/home/vera_luechinger/data/imdb_reviews/"+os.environ['BUCKET_NAME']+"_en_batch_1_de_translations.txt", 'r') as file:
de_1 = file.readlines()
with open("/home/vera_luechinger/data/imdb_reviews/"+os.environ['BUCKET_NAME']+"_en_batch_2_de_translations.txt", 'r') as file:
de_2 = file.readlines()
with open("/home/vera_luechinger/data/imdb_reviews/"+os.environ['BUCKET_NAME']+"_en_batch_1_fr_translations.txt", 'r') as file:
fr_1 = file.readlines()
with open("/home/vera_luechinger/data/imdb_reviews/"+os.environ['BUCKET_NAME']+"_en_batch_2_fr_translations.txt", 'r') as file:
fr_2 = file.readlines()
de = de_1 + de_2
fr = fr_1 + fr_2
de = [item.replace("\n","") for item in de]
fr = [item.replace("\n","") for item in fr]
len(de)
```
# Lecture 30 – Perception, Case Study
## Data 94, Spring 2021
```
from datascience import *
import numpy as np
Table.interactive_plots()
import plotly.express as px
sky = Table.read_table('data/skyscrapers.csv') \
.where('status.current', are.contained_in(['completed', 'under construction'])) \
.select('name', 'location.city', 'location.latitude', 'location.longitude',
'statistics.floors above', 'statistics.height', 'status.completed.year') \
.relabeled(['location.city', 'location.latitude', 'location.longitude',
'statistics.floors above', 'statistics.height', 'status.completed.year'],
['city', 'latitude', 'longitude', 'floors', 'height', 'year']) \
.where('height', are.above(0)) \
.where('floors', are.above(0))
sky
```
## Perception
```
sky.group('city') \
.where('count', are.above_or_equal_to(40)) \
.sort('count', descending = True) \
.barh('city', title = 'Number of Skyscrapers Per City')
# Remember, you're not responsible for the code here.
px.pie(sky.group('city').where('count', are.above_or_equal_to(40)).to_df(),
values = 'count',
names = 'city',
title = 'Number of Skyscrapers Per City (Top 10 Only)'
)
```
## Case Study – Skyscrapers
```
sky.shuffle()
```
### Which cities have the most skyscrapers?
```
sky.group('city') \
.where('count', are.above_or_equal_to(20)) \
.sort('count', descending = True)
sky.group('city') \
.where('count', are.above_or_equal_to(20)) \
.sort('count', descending = True) \
.barh('city', title = 'Number of Skyscrapers Per City (Min. 20)')
```
Do any of the above cities stick out to you?
### What is the distribution of skyscraper heights?
```
sky.column('height').min()
sky.column('height').max()
sky.hist('height', density = False, bins = np.arange(0, 600, 25),
title = 'Distribution of Skyscraper Heights')
```
Let's zoom in a little more.
```
sky.where('height', are.below(300)) \
.hist('height', density = False, bins = np.arange(0, 310, 10),
title = 'Distribution of Skyscraper Heights Below 300m')
```
### What's the distribution of short vs. tall skyscrapers in each city?
```
sky
```
Let's say a skyscraper is "short" if its height is less than or equal to 150 meters; otherwise, it's "tall".
```
def height_cat(height):
if height <= 150:
return 'short'
return 'tall'
sky.apply(height_cat, 'height')
sky = sky.with_columns('height category', sky.apply(height_cat, 'height'))
sky
```
We can use `pivot` to draw a bar chart of the number of short and tall skyscrapers per city.
### [Quick Check 1](https://edstem.org/us/courses/3251/lessons/12407/slides/60647)
Fill in the blanks to create the table `short_and_tall`, which has two columns, `'short'` and `'tall'`, and one row for each city with **at least 5 short and 5 tall skyscrapers**. The first five rows of `short_and_tall` are shown below.
| city | short | tall |
|--------------:|--------:|-------:|
| New York City | 341 | 217 |
| Chicago | 268 | 108 |
| Miami | 58 | 49 |
| Houston | 34 | 27 |
| San Francisco | 43 | 22 |
```py
short_and_tall = sky.pivot(__(a)__, __(b)__) \
.where(__(c)__, are.above_or_equal_to(5)) \
.where('tall', are.above_or_equal_to(5)) \
.sort('tall', descending = True)
```
```
# short_and_tall = sky.pivot(__(a)__, __(b)__) \
# .where(__(c)__, are.above_or_equal_to(5)) \
# .where('tall', are.above_or_equal_to(5)) \
# .sort('tall', descending = True)
# short_and_tall.barh('city', title = 'Number of Short and Tall Skyscrapers Per City (Min. 5 Each)')
```
It seems like most cities have roughly twice as many "short" skyscrapers as they do "tall" skyscrapers.
What if we want to look at the distribution of the number of floors per skyscraper, separated by height category?
```
sky.hist('floors', group = 'height category',
density = False,
bins = np.arange(0, 150, 5),
title = 'Distribution of Number of Floors Per Skyscraper')
```
Since there is overlap between the two histograms, we can see that some short skyscrapers (at most 150m tall) have more floors than some tall skyscrapers!
### What's the relationship between height and number of floors?
```
sky
sky.scatter('height', 'floors',
s = 30,
group = 'height category',
title = 'Number of Floors vs. Height',
yaxis_title = 'Number of Floors')
sky.where('height', are.above(300)) \
.scatter('height', 'floors',
s = 50,
labels = 'name',
title = 'Number of Floors vs. Height (Min. 300m)')
```
### How many skyscrapers were built per year?
```
sky
sky.group('year')
```
This is obviously an error in our data.
```
sky.where('year', 0)
sky.where('year', are.not_equal_to(0)) \
.group('year') \
.plot('year', title = 'Number of Skyscrapers Built Per Year')
```
What if we want to look at the number of skyscrapers per year built in different cities?
```
sky.where('city', are.contained_in(['New York City', 'Chicago'])) \
.where('year', are.not_equal_to(0)) \
.pivot('city', 'year')
sky.where('city', are.contained_in(['New York City', 'Chicago'])) \
.where('year', are.not_equal_to(0)) \
.pivot('city', 'year') \
.plot('year',
title = 'Number of Skyscrapers Built Per Year in NYC and Chicago')
```
### Where on a map are most skyscrapers located?
```
sky
Circle.map_table(sky.select('latitude', 'longitude'),
line_color = None,
fill_opacity = 0.65,
area = 75,
color = 'orange')
```
Let's look at a map of tall skyscrapers in New York City.
```
ny_tall = sky.where('city', 'New York City') \
.where('height category', 'tall') \
.select('latitude', 'longitude', 'name', 'height') \
.relabeled(['name', 'height'], ['labels', 'color_scale'])
ny_tall
Circle.map_table(ny_tall,
line_color = None,
fill_opacity = 0.65,
area = 150,
color_scale = None)
```
It seems like most skyscrapers in NYC are either in the financial district or in Midtown. The circles for One World Trade Center and the Empire State Building are bright.
Lastly, what if we want to look at where short and tall skyscrapers are throughout the country?
```
sky
```
There are two solutions here.
1. Create a function that takes in `'short'` or `'tall'` and returns the desired color. (We did this in Lecture 28.)
2. Create a table with two columns, one with `'short'` and `'tall'` and the other with the desired colors, and join this table with `sky`.
We will use the second approach here.
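For reference, a rough sketch of the first approach could look like this (the function name and the color choices are just illustrative):
```
def category_color(category):
    # Map each height category to a marker color.
    if category == 'short':
        return 'orange'
    return 'green'

sky_with_colors_v1 = sky.select('latitude', 'longitude') \
                        .with_columns('colors', sky.apply(category_color, 'height category'))
sky_with_colors_v1
```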
```
sky_to_color = Table().with_columns(
'category', np.array(['short', 'tall']),
'colors', np.array(['orange', 'green'])
)
sky_to_color
sky_with_colors = sky.join('height category', sky_to_color, 'category') \
.select('latitude', 'longitude', 'colors')
sky_with_colors
Circle.map_table(sky_with_colors,
line_color = None,
fill_opacity = 0.7)
```
While there seem to be short skyscrapers (orange) throughout the country, tall skyscrapers generally seem to be concentrated in larger cities.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import warnings
warnings.filterwarnings('ignore', category=DeprecationWarning)
warnings.filterwarnings('ignore', category=FutureWarning)
import sklearn
sklearn.set_config(print_changed_only=True)
```
# Algorithm Chains and Pipelines
```
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
# load and split the data
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, random_state=0)
# compute minimum and maximum on the training data
scaler = MinMaxScaler().fit(X_train)
# rescale training data
X_train_scaled = scaler.transform(X_train)
svm = SVC()
# learn an SVM on the scaled training data
svm.fit(X_train_scaled, y_train)
# scale test data and score the scaled data
X_test_scaled = scaler.transform(X_test)
svm.score(X_test_scaled, y_test)
```
### Building Pipelines
```
from sklearn.pipeline import Pipeline
pipe = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC())])
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
```
### Using Pipelines in Grid-searches
```
param_grid = {'svm__C': [0.001, 0.01, 0.1, 1, 10, 100],
'svm__gamma': [0.001, 0.01, 0.1, 1, 10, 100]}
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(pipe, param_grid=param_grid)
grid.fit(X_train, y_train)
print("best cross-validation accuracy:", grid.best_score_)
print("test set score: ", grid.score(X_test, y_test))
print("best parameters: ", grid.best_params_)
```
# Information leakage: feature selection with and without Pipelines
```
rnd = np.random.RandomState(seed=0)
X = rnd.normal(size=(100, 10000))
y = rnd.normal(size=(100,))
from sklearn.feature_selection import SelectPercentile, f_regression
select = SelectPercentile(score_func=f_regression,
percentile=5)
select.fit(X, y)
X_selected = select.transform(X)
print(X_selected.shape)
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge
np.mean(cross_val_score(Ridge(), X_selected, y))
pipe = Pipeline([("select", SelectPercentile(score_func=f_regression, percentile=5)),
("ridge", Ridge())])
np.mean(cross_val_score(pipe, X, y))
```
### The General Pipeline Interface
```
def fit(self, X, y):
X_transformed = X
for step in self.steps[:-1]:
# iterate over all but the final step
# fit and transform the data
X_transformed = step[1].fit_transform(X_transformed, y)
# fit the last step
self.steps[-1][1].fit(X_transformed, y)
return self
def predict(self, X):
X_transformed = X
for step in self.steps[:-1]:
# iterate over all but the final step
# transform the data
X_transformed = step[1].transform(X_transformed)
# predict using the last step
return self.steps[-1][1].predict(X_transformed)
```
### Convenient Pipeline creation with ``make_pipeline``
```
from sklearn.pipeline import make_pipeline
# standard syntax
pipe_long = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC(C=100))])
# abbreviated syntax
pipe_short = make_pipeline(MinMaxScaler(), SVC(C=100))
pipe_short.steps
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
pipe = make_pipeline(StandardScaler(), PCA(n_components=2),
StandardScaler())
pipe.steps
```
#### Accessing step attributes
```
# fit the pipeline defined above to the cancer dataset
pipe.fit(cancer.data)
# extract the first two principal components from the "pca" step
components = pipe.named_steps.pca.components_
print(components.shape)
pipe['pca'].components_.shape
pipe[0]
pipe[1]
pipe[:2]
```
#### Accessing attributes in a grid-searched pipeline
```
from sklearn.linear_model import LogisticRegression
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
param_grid = {'logisticregression__C': [0.01, 0.1, 1, 10, 100]}
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, random_state=4)
grid = GridSearchCV(pipe, param_grid)
grid.fit(X_train, y_train)
print(grid.best_estimator_)
print(grid.best_estimator_.named_steps.logisticregression)
print(grid.best_estimator_['logisticregression'])
print(grid.best_estimator_.named_steps.logisticregression.coef_)
print(grid.best_estimator_['logisticregression'].coef_)
```
### Grid-searching preprocessing steps and model parameters
```
from sklearn.datasets import load_boston
boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(
boston.data, boston.target, random_state=0)
from sklearn.preprocessing import PolynomialFeatures
pipe = make_pipeline(
StandardScaler(),
PolynomialFeatures(),
Ridge())
param_grid = {'polynomialfeatures__degree': [1, 2, 3],
'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]}
grid = GridSearchCV(pipe, param_grid=param_grid,
n_jobs=-1, return_train_score=True)
grid.fit(X_train, y_train)
res = pd.DataFrame(grid.cv_results_)
res.head()
res = pd.pivot_table(res, index=['param_polynomialfeatures__degree', 'param_ridge__alpha'],
values=['mean_train_score', 'mean_test_score'])
res['mean_train_score'].unstack()
res['mean_test_score'].unstack()
print(grid.best_params_)
grid.score(X_test, y_test)
from sklearn.linear_model import Lasso
from sklearn.model_selection import RepeatedKFold
pipe = Pipeline([('scaler', StandardScaler()), ('regressor', Ridge())])
param_grid = {'scaler': [StandardScaler(), MinMaxScaler(), 'passthrough'],
'regressor': [Ridge(), Lasso()],
'regressor__alpha': np.logspace(-3, 3, 7)}
grid = GridSearchCV(pipe, param_grid,
cv=RepeatedKFold(n_splits=10, n_repeats=10))
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
grid.best_score_
grid.best_params_
from sklearn.tree import DecisionTreeRegressor
param_grid = [{'regressor': [DecisionTreeRegressor()],
'regressor__max_depth': [2, 3, 4]},
{'regressor': [Ridge()],
'regressor__alpha': [0.1, 1]}
]
```
# More on ColumnTransformer
```
from sklearn.compose import make_column_transformer, ColumnTransformer
bike = pd.read_csv("data/bike_day_raw.csv")
bike.head()
bike.dtypes
bike_data = bike.drop("cnt", axis=1)
cat_features = bike.columns[:6]
cat_features
from sklearn.preprocessing import OneHotEncoder
ct = make_column_transformer((OneHotEncoder(sparse=False), cat_features),
remainder=StandardScaler())
ct.transformers
ColumnTransformer([('ohe', OneHotEncoder(sparse=False), cat_features)],
remainder=StandardScaler())
ColumnTransformer([('ohe', OneHotEncoder(sparse=False), cat_features),
('scaler', StandardScaler(), [6, 7, 8, 9])])
ct.fit(bike_data)
bike_data.shape
ct.transform(bike_data).shape
ct.transform(bike_data)
ct = make_column_transformer((OneHotEncoder(sparse=False), cat_features),
remainder=StandardScaler())
ohe_pipe = make_pipeline(ct, Ridge())
X_train, X_test, y_train, y_test = train_test_split(bike_data, bike.cnt, random_state=42)
cross_val_score(ohe_pipe, X_train, y_train)
from sklearn.preprocessing import PowerTransformer
ct = make_column_transformer((OneHotEncoder(sparse=False), cat_features))
ohe_pipe = make_pipeline(ct, Ridge())
param_grid = {'columntransformer__remainder':
[StandardScaler(), PowerTransformer(method='yeo-johnson')],
'ridge__alpha': np.logspace(-3, 2, 6)}
grid = GridSearchCV(ohe_pipe, param_grid)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
grid.best_params_
res = pd.DataFrame(grid.cv_results_)
res
plt.plot(res.mean_test_score[:6].values, label="StandardScaler")
plt.plot(res.mean_test_score[6:].values, label="PowerTransformer")
plt.legend()
```
# Exercise
Load the adult dataset. Create a pipline using the ColumnTransformer, OneHotEncoder, Scaling, and polynomial features and a linear classifier.
Search over the best options for the polynomial features together with the regularization of a linear model.
```
pd.read_csv("data/adult.csv", index_col=0).head()
# use OneHotEncoder(handle_unknown='ignore') to ignore new categories in test set.
```
# Example: Compare CZT to FFT
```
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
# CZT package
import czt
# https://github.com/garrettj403/SciencePlots
plt.style.use(['science', 'notebook'])
```
# Generate Time-Domain Signal
```
# Time data
t = np.arange(0, 20, 0.1) * 1e-3
dt = t[1] - t[0]
Fs = 1 / dt
N = len(t)
print("Sampling period: {:5.2f} ms".format(dt * 1e3))
print("Sampling frequency: {:5.2f} kHz".format(Fs / 1e3))
print("Nyquist frequency: {:5.2f} kHz".format(Fs / 2 / 1e3))
print("Number of points: {:5d}".format(N))
# Signal data
def model1(t):
"""Exponentially decaying sine wave with higher-order distortion."""
output = (1.0 * np.sin(2 * np.pi * 1e3 * t) +
0.3 * np.sin(2 * np.pi * 2.5e3 * t) +
0.1 * np.sin(2 * np.pi * 3.5e3 * t)) * np.exp(-1e3 * t)
return output
def model2(t):
"""Exponentially decaying sine wave without higher-order distortion."""
output = (1.0 * np.sin(2 * np.pi * 1e3 * t)) * np.exp(-1e3 * t)
return output
sig = model1(t)
# Plot time-domain data
plt.figure()
t_tmp = np.linspace(0, 6, 601) / 1e3
plt.plot(t_tmp*1e3, model1(t_tmp), 'k', lw=0.5, label='Data')
plt.plot(t*1e3, sig, 'ro--', label='Samples')
plt.xlabel("Time (ms)")
plt.ylabel("Signal")
plt.xlim([0, 6])
plt.legend()
plt.title("Time-domain signal");
```
# Frequency-domain
```
sig_fft = np.fft.fftshift(np.fft.fft(sig))
f_fft = np.fft.fftshift(np.fft.fftfreq(N, d=dt))
freq, sig_f = czt.time2freq(t, sig)
# Plot results
fig1 = plt.figure(1)
frame1a = fig1.add_axes((.1,.3,.8,.6))
plt.plot(f_fft / 1e3, np.abs(sig_fft), 'k', label='FFT')
plt.plot(freq / 1e3, np.abs(sig_f), 'ro--', label='CZT')
plt.ylabel("Signal magnitude")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.legend()
plt.title("Frequency-domain")
frame1b = fig1.add_axes((.1,.1,.8,.2))
plt.plot(f_fft / 1e3, (np.abs(sig_fft) - np.abs(sig_f)) * 1e13, 'r-', label="Data")
plt.xlabel("Frequency (kHz)")
plt.ylabel("Residual\n" + r"($\times10^{-13}$)")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.savefig("results/freq-domain.png", dpi=600)
# Plot results
fig2 = plt.figure(2)
frame2a = fig2.add_axes((.1,.3,.8,.6))
plt.plot(f_fft / 1e3, np.angle(sig_fft), 'k', label='FFT')
plt.plot(freq / 1e3, np.angle(sig_f), 'ro--', label='CZT')
plt.ylabel("Signal phase")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3])
plt.legend()
plt.title("Frequency-domain")
frame2b = fig2.add_axes((.1,.1,.8,.2))
plt.plot(f_fft / 1e3, (np.angle(sig_fft) - np.angle(sig_f)) * 1e13, 'r-', label="Data")
plt.xlabel("Frequency (kHz)")
plt.ylabel("Residual\n" + r"($\times10^{-13}$)")
plt.xlim([f_fft.min()/1e3, f_fft.max()/1e3]);
```
.. meta::
:description: A guide which introduces the most important steps to get started with pymoo, an open-source multi-objective optimization framework in Python.
.. meta::
:keywords: Multi-objective Optimization, Python, Evolutionary Computation, Optimization Test Problem, Hypervolume
```
%%capture
%run part_2.ipynb
```
# Part IV: Analysis of Convergence
**Great!** So far, we have executed an algorithm and already obtained a solution set. But let us not stop here without knowing how the algorithm has performed. This will also answer how we should define a termination criterion if we solve the problem again. The convergence analysis shall consider two cases: i) the Pareto-front is not known, or ii) the Pareto-front has been derived analytically or a reasonable approximation exists.
## Result
To further check how close the results match the analytically derived optimum, we have to convert the objective space values to the original definition where the second objective $f_2$ was maximized. Plotting then the Pareto-front shows how close the algorithm was able to converge.
```
from pymoo.util.misc import stack
class MyTestProblem(MyProblem):
def _calc_pareto_front(self, flatten=True, *args, **kwargs):
f2 = lambda f1: ((f1/100) ** 0.5 - 1)**2
F1_a, F1_b = np.linspace(1, 16, 300), np.linspace(36, 81, 300)
F2_a, F2_b = f2(F1_a), f2(F1_b)
pf_a = np.column_stack([F1_a, F2_a])
pf_b = np.column_stack([F1_b, F2_b])
return stack(pf_a, pf_b, flatten=flatten)
    def _calc_pareto_set(self, flatten=True, *args, **kwargs):
x1_a = np.linspace(0.1, 0.4, 50)
x1_b = np.linspace(0.6, 0.9, 50)
x2 = np.zeros(50)
a, b = np.column_stack([x1_a, x2]), np.column_stack([x1_b, x2])
return stack(a,b, flatten=flatten)
problem = MyTestProblem()
```
For IGD, the Pareto front needs to be known or to be approximated.
In our framework, the Pareto front of **test problems** can be obtained by:
```
pf_a, pf_b = problem.pareto_front(use_cache=False, flatten=False)
pf = problem.pareto_front(use_cache=False, flatten=True)
plt.figure(figsize=(7, 5))
plt.scatter(F[:, 0], F[:, 1], s=30, facecolors='none', edgecolors='b', label="Solutions")
plt.plot(pf_a[:, 0], pf_a[:, 1], alpha=0.5, linewidth=2.0, color="red", label="Pareto-front")
plt.plot(pf_b[:, 0], pf_b[:, 1], alpha=0.5, linewidth=2.0, color="red")
plt.title("Objective Space")
plt.legend()
plt.show()
```
Whether the optimum for your problem is known or not, we encourage all end-users of *pymoo* not to skip the analysis of the obtained solution set. Visualizations for high-dimensional objective spaces (in design and/or objective space) are also provided and shown [here](../visualization/index.ipynb).
In **Part II**, we ran the algorithm without keeping track of the optimization progress or storing any intermediate information. However, for analyzing the convergence, historical data need to be stored. One way of accomplishing that is enabling the `save_history` flag, which will store a deep copy of the algorithm object in each iteration and save it in the `Result` object. This approach is more memory-intensive (especially for many iterations) but has the advantage that **any** algorithm-dependent variable can be analyzed afterwards.
A not negligible step is the post-processing after having obtained the results. We strongly recommend not only analyzing the final result but also the algorithm's behavior. This gives more insights into the convergence of the algorithm.
For such an analysis, intermediate steps of the algorithm need to be considered. This can either be achieved by:
- A `Callback` class storing the necessary information in each iteration of the algorithm.
- Enabling the `save_history` flag when calling the minimize method to store a deep copy of the algorithm's objective each iteration.
We provide some more details about each variant in our [convergence](../misc/convergence.ipynb) tutorial.
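For illustration, a minimal sketch of the first variant might look like the following (assuming the `Callback` base class lives in `pymoo.core.callback`, as in recent pymoo versions; the class name and stored keys are just placeholders):
```
from pymoo.core.callback import Callback

class NumEvalCallback(Callback):
    """Store the number of function evaluations and the current optimum in each iteration."""

    def __init__(self):
        super().__init__()
        self.data["n_evals"] = []
        self.data["opt_F"] = []

    def notify(self, algorithm):
        self.data["n_evals"].append(algorithm.evaluator.n_eval)
        self.data["opt_F"].append(algorithm.opt.get("F"))

# such a callback could then be passed to minimize(..., callback=NumEvalCallback(), ...)
```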
As you might have already seen, we set `save_history=True` when calling the `minimize` method in this getting started guide and will thus use the `history` for our analysis. Moreover, we need to decide what metric should be used to measure the performance of our algorithm. In this tutorial, we are going to use `Hypervolume` and `IGD`. Feel free to look at our [performance indicators](../misc/indicators.ipynb) to find more information about metrics to measure the performance of multi-objective algorithms.
```
from pymoo.optimize import minimize
res = minimize(problem,
algorithm,
("n_gen", 40),
seed=1,
save_history=True,
verbose=False)
X, F = res.opt.get("X", "F")
hist = res.history
print(len(hist))
```
From the `history` it is relatively easy to extract the information we need for an analysis.
```
n_evals = []             # corresponding number of function evaluations
hist_F = [] # the objective space values in each generation
hist_cv = [] # constraint violation in each generation
hist_cv_avg = [] # average constraint violation in the whole population
for algo in hist:
# store the number of function evaluations
n_evals.append(algo.evaluator.n_eval)
# retrieve the optimum from the algorithm
opt = algo.opt
    # store the least constraint violation and the average in each population
hist_cv.append(opt.get("CV").min())
hist_cv_avg.append(algo.pop.get("CV").mean())
    # filter out only the feasible solutions and append their objective space values
feas = np.where(opt.get("feasible"))[0]
hist_F.append(opt.get("F")[feas])
```
## Constraint Satisfaction
First, let us quickly see when the first feasible solution has been found:
```
k = np.where(np.array(hist_cv) <= 0.0)[0].min()
print(f"At least one feasible solution in Generation {k} after {n_evals[k]} evaluations.")
```
Because this problem does not have much complexity, a feasible solution was found right away. Nevertheless, this can be entirely different for your optimization problem and is also worth being analyzed first.
```
# replace this line by `hist_cv` if you like to analyze the least feasible optimal solution and not the population
vals = hist_cv_avg
k = np.where(np.array(vals) <= 0.0)[0].min()
print(f"Whole population feasible in Generation {k} after {n_evals[k]} evaluations.")
plt.figure(figsize=(7, 5))
plt.plot(n_evals, vals, color='black', lw=0.7, label="Avg. CV of Pop")
plt.scatter(n_evals, vals, facecolor="none", edgecolor='black', marker="p")
plt.axvline(n_evals[k], color="red", label="All Feasible", linestyle="--")
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("Constraint Violation")
plt.legend()
plt.show()
```
## Pareto-front is unknown
If the Pareto-front is not known, we can not know if the algorithm has converged to the true optimum or not. At least not without any further information. However, we can see when the algorithm has made most of its progress during optimization and thus if the number of iterations should be less or more. Additionally, the metrics serve to compare two algorithms with each other.
In multi-objective optimization, **normalization** is very important. For that reason, you will see below that the Hypervolume is computed on a set normalized by the approximated ideal and nadir points.
More details about normalization will be shown in a later part.
### Hypervolume (HV)
Hypervolume is a very well-known performance indicator for multi-objective problems. It is Pareto-compliant and is based on the volume between a predefined reference point and the solution provided. Therefore, hypervolume requires defining a reference point `ref_point`, which shall be larger than the maximum value of the Pareto front.
```
approx_ideal = F.min(axis=0)
approx_nadir = F.max(axis=0)
from pymoo.indicators.hv import Hypervolume
metric = Hypervolume(ref_point= np.array([1.1, 1.1]),
norm_ref_point=False,
zero_to_one=True,
ideal=approx_ideal,
nadir=approx_nadir)
hv = [metric.do(_F) for _F in hist_F]
plt.figure(figsize=(7, 5))
plt.plot(n_evals, hv, color='black', lw=0.7, label="Hypervolume")
plt.scatter(n_evals, hv, facecolor="none", edgecolor='black', marker="p")
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("Hypervolume")
plt.show()
```
**Note:** Hypervolume becomes computationally expensive with increasing dimensionality. The exact hypervolume can be calculated efficiently for 2 and 3 objectives. For higher dimensions, some researchers use a hypervolume approximation, which is not available yet in pymoo.
### Running Metric
Another way of analyzing a run when the true Pareto front is **not** known is the recently proposed [running metric](https://www.egr.msu.edu/~kdeb/papers/c2020003.pdf). The running metric shows the difference in the objective space from one generation to another and uses the algorithm's survival to visualize the improvement.
This metric is also being used in pymoo to determine the termination of a multi-objective optimization algorithm if no default termination criteria have been defined.
For instance, this analysis reveals that the algorithm improved from the 4th to the 5th generation significantly.
```
from pymoo.util.running_metric import RunningMetric
running = RunningMetric(delta_gen=5,
n_plots=3,
only_if_n_plots=True,
key_press=False,
do_show=True)
for algorithm in res.history[:15]:
running.notify(algorithm)
```
Plotting until the final population shows that the algorithm seems to have more or less converged, and only a slight improvement has been made.
```
from pymoo.util.running_metric import RunningMetric
running = RunningMetric(delta_gen=10,
n_plots=4,
only_if_n_plots=True,
key_press=False,
do_show=True)
for algorithm in res.history:
running.notify(algorithm)
```
## Pareto-front is known or approximated
### IGD/GD/IGD+/GD+
The Pareto-front for a problem can either be provided manually or directly implemented in the `Problem` definition to analyze the run on the fly. Here, we show an example of using the history of the algorithm as an additional post-processing step.
For real-world problems, you have to use an **approximation**. An approximation can be obtained by running an algorithm a couple of times and extracting the non-dominated solutions out of all solution sets. If you have only a single run, an alternative is to use the obtained non-dominated set of solutions as an approximation. However, the result then only indicates how much progress the algorithm has made in converging to this final set.
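As a rough sketch (using a plain numpy dominance filter rather than any specific pymoo utility), such an approximation could be assembled as follows; here `F_runs` is a hypothetical stand-in for the objective values collected from several independent runs.
```
import numpy as np

def non_dominated(F):
    """Return the non-dominated subset of a set of objective vectors (minimization)."""
    F = np.asarray(F, dtype=float)
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        # F[i] is dominated if some other point is <= in every objective and < in at least one
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominated.any()
    return F[keep]

# hypothetical stand-in: in practice, stack the F obtained from several independent runs
F_runs = np.vstack([hist_F[-1], F])
pf_approx = non_dominated(F_runs)
pf_approx.shape
```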
```
from pymoo.indicators.igd import IGD
metric = IGD(pf, zero_to_one=True)
igd = [metric.do(_F) for _F in hist_F]
plt.plot(n_evals, igd, color='black', lw=0.7, label="IGD")
plt.scatter(n_evals, igd, facecolor="none", edgecolor='black', marker="p")
plt.axhline(10**-2, color="red", label="10^-2", linestyle="--")
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("IGD")
plt.yscale("log")
plt.legend()
plt.show()
from pymoo.indicators.igd_plus import IGDPlus
metric = IGDPlus(pf, zero_to_one=True)
igd = [metric.do(_F) for _F in hist_F]
plt.plot(n_evals, igd, color='black', lw=0.7, label="IGD+")
plt.scatter(n_evals, igd, facecolor="none", edgecolor='black', marker="p")
plt.axhline(10**-2, color="red", label="10^-2", linestyle="--")
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("IGD+")
plt.yscale("log")
plt.legend()
plt.show()
```
```
%reload_ext autoreload
%autoreload 2
from fastai.basics import *
```
# Rossmann
## Data preparation / Feature engineering
In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them [here](http://files.fast.ai/part2/lesson14/rossmann.tgz). Then you should untar them in the directory to which `PATH` is pointing below.
For completeness, the implementation used to put them together is included below.
```
PATH=Config().data_path()/Path('rossmann/')
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
tables = [pd.read_csv(PATH/f'{fname}.csv', low_memory=False) for fname in table_names]
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
```
We turn state Holidays to booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
```
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
```
`join_df` is a function for joining tables on specific fields. By default, we'll be doing a left outer join of `right` on the `left` argument using the given fields for each table.
Pandas does joins using the `merge` method. The `suffixes` argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "\_y" to those on the right.
```
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
```
Join weather/state names.
```
weather = join_df(weather, state_names, "file", "StateName")
```
In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use `.loc[rows, cols]` to select a list of rows and a list of columns from the dataframe. In this case, we're selecting rows w/ statename 'NI' by using a boolean list `googletrend.State=='NI'` and selecting "State".
```
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
```
The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
You should *always* consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add to every table with a date field.
```
def add_datepart(df, fldname, drop=True, time=False):
"Helper function that adds columns relevant to a date."
fld = df[fldname]
fld_dtype = fld.dtype
if isinstance(fld_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
fld_dtype = np.datetime64
if not np.issubdtype(fld_dtype, np.datetime64):
df[fldname] = fld = pd.to_datetime(fld, infer_datetime_format=True)
targ_pre = re.sub('[Dd]ate$', '', fldname)
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
if time: attr = attr + ['Hour', 'Minute', 'Second']
for n in attr: df[targ_pre + n] = getattr(fld.dt, n.lower())
df[targ_pre + 'Elapsed'] = fld.astype(np.int64) // 10 ** 9
if drop: df.drop(fldname, axis=1, inplace=True)
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
```
The Google trends data has a special category for the whole of the Germany - we'll pull that out so we can use it explicitly.
```
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
```
Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
*Aside*: Why not just do an inner join?
If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it. (Comparing before/after # of rows for inner join is equivalent, but requires keeping track of before/after row #'s. Outer join is easier.)
```
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]),len(joined_test[joined_test.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year", "Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()]),len(joined_test[joined_test.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]),len(joined_test[joined_test.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]),len(joined_test[joined_test.Mean_TemperatureC.isnull()])
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns: df.drop(c, inplace=True, axis=1)
```
Next we'll fill in missing values to avoid complications with `NA`'s. `NA` (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary *signal value* that doesn't otherwise appear in the data.
```
for df in (joined,joined_test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
```
Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of `apply()` in mapping a function across dataframe values.
```
for df in (joined,joined_test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
```
We'll replace some erroneous / outlying data.
```
for df in (joined,joined_test):
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
```
We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
```
for df in (joined,joined_test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30
df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
```
Same process for Promo dates. You may need to install the `isoweek` package first.
```
# If needed, uncomment:
# ! pip install isoweek
from isoweek import Week
for df in (joined,joined_test):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
for df in (joined,joined_test):
df.loc[df.Promo2Days<0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"]//7
df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
joined.to_pickle(PATH/'joined')
joined_test.to_pickle(PATH/'joined_test')
```
## Durations
It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event
This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we'll write a small helper to handle this type of data.
We'll define a function `get_elapsed` for cumulative counting across a sorted dataframe. Given a particular field `fld` to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
Upon initialization, this will result in datetime na's until the field is encountered. This is reset every time a new store is seen. We'll see how to use this shortly.
```
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values,df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
df[pre+fld] = res
```
We'll be applying this to a subset of columns:
```
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
#df = train[columns]
df = train[columns].append(test[columns])
```
Let's walk through an example.
Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call `get_elapsed('SchoolHoliday', 'After')`:
This will:
* Be applied to every row of the dataframe, in order of store and date
* Add to the dataframe the number of days since the last School Holiday was seen
* If we sort in the other direction, count the days until the next School Holiday instead
```
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
```
We'll do this for two more fields.
```
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
```
We're going to set the active index to Date.
```
df = df.set_index("Date")
```
Then set null values from elapsed field calculations to 0.
```
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o+p
df[a] = df[a].fillna(0).astype(int)
```
Next we'll demonstrate window functions in pandas to calculate rolling quantities.
Here we're sorting by date (`sort_index()`) and counting the number of events of interest (`sum()`) defined in `columns` in the following week (`rolling()`), grouped by Store (`groupby()`). We do the same in the opposite direction.
```
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
```
Next we want to drop the Store indices grouped together in the window function.
Often in pandas, there is an option to do this in place. This is time and memory efficient when working with large datasets.
```
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
```
Now we'll merge these values onto the df.
```
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
```
It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one, that way you can go back to it easily if you need to make changes to it.
```
df.to_pickle(PATH/'df')
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = pd.read_pickle(PATH/'joined')
joined_test = pd.read_pickle(PATH/f'joined_test')
joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
```
The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are naturally spikes in sales that one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
```
joined = joined[joined.Sales!=0]
```
We'll back this up as well.
```
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)
joined.to_pickle(PATH/'train_clean')
joined_test.to_pickle(PATH/'test_clean')
```
Neuromorphic engineering I
## Lab 8: Silicon Synaptic Circuits
Team member 1: Jan Hohenheim
Team member 2: Maxim Gärtner
Date:
----------------------------------------------------------------------------------------------------------------------
This week, we will see how synaptic circuits generate currents when stimulated by voltage pulses. Specifically we will measure the response of the synapse to a single pulse, and to a sequence of spikes.
The objectives of this lab are to:
- Analyze log-domain synapse circuits.
- Measure the response properties of the diff-pair integrator (DPI) synapse and of the dual diff-pair integrator (DDI) synapse.
## 1. Prelab
**A Differential Pair Integrator circuit**

**(1)** Write the equations characterizing $I_{w}, I_{thr} , I_{in}, I_{\tau}, I_{syn}, I_C$ assuming all corresponding FETs are in saturation and operate in weak-inversion.
> - $I_w = I_0 e^\frac{\kappa V_w}{U_T}$
> - $I_{thr} = I_0 e^\frac{\kappa V_{thr} - V_{dd}}{U_T}$
> - $I_{in} = I_0 e^\frac{\kappa V_{syn} - V_{syn}}{U_T} = I_w \frac{e^\frac{\kappa V_{syn}}{U_T}}{e^\frac{\kappa V_{syn}}{U_T} + e^\frac{\kappa V_{thr}}{U_T}}$
> - $I_{\tau} = I_0 e^\frac{\kappa(V_{dd} - V_\tau)}{U_T}$
> - $I_{syn} = I_0 e^\frac{\kappa(V_{dd} - V_{syn})}{U_T}$
> - $I_C = C \frac{d}{dt} (V_{dd} - V_{syn})$
**(2)** What is the time constant of the circuit?
> $\tau = \frac{CU_T}{\kappa I_\tau}$
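A quick numerical sketch (not part of the handout; it assumes typical values $C \approx 2\,\mathrm{pF}$, as used for $C_{syn}$ later in this lab, $U_T \approx 25\,\mathrm{mV}$ and $\kappa \approx 0.7$) to get a feeling for the magnitude of $\tau$:
```
# Back-of-the-envelope estimate of the DPI time constant tau = C*U_T/(kappa*I_tau).
# Assumed values, not measured on the chip:
C = 2e-12      # synaptic capacitance [F]
U_T = 25e-3    # thermal voltage [V]
kappa = 0.7    # subthreshold slope factor
for I_tau in (5e-12, 50e-12, 500e-12):
    tau = C * U_T / (kappa * I_tau)
    print(f"I_tau = {I_tau:.0e} A  ->  tau = {tau*1e3:.2f} ms")
```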
**(3)** Derive the circuit's response to a step input assuming $I_{w}(t < 0) = 0, I_{w}(t > 0) \gg I_{\tau}$.
> - $I_w \ll I_\tau \Rightarrow \tau \frac{d}{dt}I_{syn} + I_{syn} = 0 \Rightarrow \frac{d}{dt}I_{syn} = - \frac{I_{syn}}{\tau}$
> - $I_w \gg I_\tau \Rightarrow \tau \frac{d}{dt}I_{syn} + I_{syn} = \frac{I_w I_{thr}}{I_\tau} \Rightarrow \frac{d}{dt}I_{syn} = \frac{I_w I_{thr} - I_{syn}I_\tau}{\tau I_\tau}$
```
import numpy as np
import matplotlib.pyplot as plt
def get_next_I_syn(I_syn, tau, I_tau, I_thr, I_w, dt):
return I_syn + (I_w*I_thr - I_syn*I_tau)/(tau * I_tau)*dt
tau = 0.3
I_tau = 5e-9
I_w = 5e-7
I_thr = 5e-6
x = np.linspace(0, 2, 100)
dt = x[1] - x[0]
y = [0]
for _ in range(len(x[1:])):
I_syn = get_next_I_syn(y[-1], tau, I_tau, I_thr, I_w, dt)
y.append(I_syn)
plt.plot(x, y, label="$I_{syn}$")
plt.title(r"$I_{syn}$ with $I_{w}(t < 0) = 0, I_{w}(t > 0) \gg I_{\tau}$")
plt.ylabel("$I_{syn}$ [A]")
plt.xlabel("t [s]")
plt.legend()
plt.show()
```
**(4)** Derive the circuit's response to a step input assuming $I_{w}(t < 0) \gg I_{\tau}, I_{w}(t > 0) = 0$.
> - $I_w \ll I_\tau \Rightarrow \tau \frac{d}{dt}I_{syn} + I_{syn} = 0 \Rightarrow \frac{d}{dt}I_{syn} = - \frac{I_{syn}}{\tau}$
> - $I_w \gg I_\tau \Rightarrow \tau \frac{d}{dt}I_{syn} + I_{syn} = \frac{I_w I_{thr}}{I_\tau} \Rightarrow \frac{d}{dt}I_{syn} = \frac{I_w I_{thr} - I_{syn}I_\tau}{\tau I_\tau}$
```
import numpy as np
import matplotlib.pyplot as plt
def get_next_I_syn(I_syn, tau, I_tau, I_thr, I_w, dt):
return I_syn + (I_w*I_thr - I_syn*I_tau)/(tau * I_tau)*dt
tau = 0.3
I_tau = 5e-7
I_w = 5e-9
I_thr = 5e-6
x = np.linspace(0, 2, 100)
dt = x[1] - x[0]
y = [5e-4]
for _ in range(len(x[1:])):
I_syn = get_next_I_syn(y[-1], tau, I_tau, I_thr, I_w, dt)
y.append(I_syn)
plt.plot(x, y, label="$I_{syn}$")
plt.title(r"$I_{syn}$ with $I_{w}(t < 0) \gg I_{\tau}, I_{w}(t > 0) = 0$")
plt.ylabel("$I_{syn}$ [A]")
plt.xlabel("t [s]")
plt.legend()
plt.show()
```
**(5)** Suppose we stimulate the circuit with a regular spike train of frequency $f$ (high enough). What happens to $I_{syn}$ in steady-state (average value)?
> $\tau \frac{d}{dt}I_{syn} + I_{syn} = \frac{I_w I_{thr}}{I_\tau}$
> Steady-state $\Rightarrow \frac{d}{dt}I_{syn} = 0\frac{A}{s}$
> $\Rightarrow I_{syn} = \frac{I_w I_{thr}}{I_\tau}$
**(6)** In what conditions (tau and thr) is the step response dependent only on $I_{w}$?
> Per the formula above, when $I_{thr} = I_\tau$
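As a quick sanity check of the steady-state expression from (5), plugging in the same example bias values that were assumed in the step-response simulation of (3):
```
# Steady-state I_syn = I_w * I_thr / I_tau, using the assumed example values from (3)
I_w, I_thr, I_tau = 5e-7, 5e-6, 5e-9
I_syn_ss = I_w * I_thr / I_tau
print(f"steady-state I_syn = {I_syn_ss:.1e} A")  # 5.0e-04 A, the plateau the simulated curve converges to
```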
# 2 Setup
## 2.1 Connect the device
```
# import the necessary libraries
import pyplane
import time
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
# create a Plane object and open the communication
if 'p' not in locals():
p = pyplane.Plane()
try:
p.open('/dev/ttyACM0')
except RuntimeError as e:
del p
print(e)
p.get_firmware_version()
# Send a reset signal to the board, check if the LED blinks
p.reset(pyplane.ResetType.Soft)
time.sleep(0.5)
# NOTE: You must send this request-events call every time you do a reset operation, otherwise the received data is noisy,
# because the class chip needs to do a handshake to get the communication right.
p.request_events(1)
# Try to read something to make sure the chip responds
p.read_current(pyplane.AdcChannel.GO0_N)
# If any of the above steps fail, delete the object, close and halt, stop the server and ask the TA to restart
# please also say your board number: ttyACMx
# del p
```
## 2.2 Chip configuration
* To measure DPI synapse:
```
p.send_coach_events([pyplane.Coach.generate_aerc_event(
pyplane.pyplane.Coach.CurrentOutputSelect.SelectLine5,
pyplane.Coach.VoltageOutputSelect.SelectLine2,
pyplane.Coach.VoltageInputSelect.NoneSelected,
pyplane.Coach.SynapseSelect.DPI,0)])
```
## 2.3 C2F
* To set up the C2F circuit:
```
# setup C2F
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_HYS_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_BIAS_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_PWLK_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_REF_L, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.C2F_REF_H, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
# setup output rail-to-rail buffer
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.RR_BIAS_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I240nA, 255)])
```
## 2.4 BiasGen
In a simplified form, the output of a branch of the BiasGen will be the gate voltage $V_b$ for the bias current $I_b$, and if the current mirror has a ratio of $w$ and the bias transistor operates in subthreshold-saturation:
\begin{equation}
I_b = w\frac{BG_{fine}}{256}I_{BG_{master}}
\end{equation}
Where $I_{BG_{master}}$ is the `BiasGenMasterCurrent` $\in \left\{ 60~\rm{pA}, 460~\rm{pA}, 3.8~\rm{nA}, 30~\rm{nA}, 240~\rm{nA} \right\}$, $BG_{fine}$ is the integer fine value $\in [0, 256)$
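As a small sketch of this formula (the mirror ratio $w$ depends on the actual transistor sizing, so it is left as a parameter defaulting to 1):
```
def bias_current(master_current_A, fine_value, mirror_ratio=1.0):
    """Approximate bias current from the simplified BiasGen formula above."""
    assert 0 <= fine_value < 256
    return mirror_ratio * fine_value / 256 * master_current_A

# e.g. a 60 pA master current with fine value 100 and unit mirror ratio:
print(bias_current(60e-12, 100))  # ~2.3e-11 A
```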
To set a bias, use a call similar to the following:
```
p.send_coach_event(pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.BIAS_NAME, \
pyplane.Coach.BiasType.BIAS_TYPE, \
pyplane.Coach.BiasGenMasterCurrent.MASTER_CURRENT, FINE_VALUE))
```
**You may have noticed that there are some biases that are not used to directly generate a current, but rather what matters is the voltage, e.g. $V_{gain}$, $V_{ex}$ and $V_{inh}$ in our HWTA circuit. Even though they may have a `BIAS_NAME` ending with `_N` or `_P` it only indicates that they are connected to the gate of an N- or a P-FET, but the `BIAS_TYPE` parameter can be both `_N` or `_P`. For example, setting a `_N` bias to `BIAS_TYPE = P` will only make this voltage very close to GND, which _is_ sometimes the designed use case.**
## 2.5 Pulse extender circuit
In case you didn't look into the last problem in the prelab: the pulse extender circuit basically defines the pulse width, which is inversely proportional to the parameter `PEX_VTAU_N`.
# 3 DPI synapse
The **DPI synapse** receives a voltage pulse train, $V_{pulse}$, as input and
outputs a corresponding synaptic current, $I_{syn}$. Additionally, the synaptic voltage, $V_{syn}$, is provided.
Bias parameters $V_{weight}$ & $V_{tau}$ affect the amplitude and decay of the response, while $V_{thr}$ acts as an additional weight bias. $C_{syn}$ sizing was chosen for a capacitance of 2pF.

**Pin map**
**$V_{syn}$ = adc[14]**
**$I_{syn}$ = c2f[9]**
The task of this exercise is to tune the parameters and observe the behavior of the DPI synapse.
## 3.1 Basic impulse response
- **Set parameters**
```
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
```
- **Data acquisition**
```
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
p.send_coach_events([pyplane.Coach.generate_pulse_event()])
for i in range(N_samples_per_pulse):
vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
inter=p.read_c2f_output(dT)
isyn[k*N_samples_per_pulse+i] += inter[9]
```
- **Plot the data**
```
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn,isyn = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
plt.plot(t,vsyn,'-')
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 1: Measured values of $V_{syn}$ as a function of time')
plt.grid()
plt.show()
plt.plot(t,isyn,'-')
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 2: Measured C2F values of $I_{syn}$ as a function of time')
plt.grid()
plt.show()
```
- **Save the data**
```
np.savetxt('data/data_ex_3_1.csv',[t,vsyn,isyn] , delimiter=',')
```
## 3.2 Different $I_{weight}$
Repeat 3.1 with a smaller and a larger $I_{weight}$, compare the three curves in the same plot.
- **Set smaller bias**
```
## REMINDER , RESET ALL PARAMETERS AS 3.1
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 50)]) #change weight
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
```
- **Data acquisition**
```
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
p.send_coach_events([pyplane.Coach.generate_pulse_event()])
for i in range(N_samples_per_pulse):
vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
inter=p.read_c2f_output(dT)
isyn[k*N_samples_per_pulse+i] += inter[9]
```
- **Save data**
```
np.savetxt('data/data_ex_3_2_smaller.csv',[t,vsyn,isyn] , delimiter=',')
```
- **Set larger bias**
```
#Insert a bigger I weight
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 150)]) #change weight
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
```
- **Data acquisition**
```
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
p.send_coach_events([pyplane.Coach.generate_pulse_event()])
for i in range(N_samples_per_pulse):
vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
inter=p.read_c2f_output(dT)
isyn[k*N_samples_per_pulse+i] += inter[9]
```
- **Save data**
```
np.savetxt('data/data_ex_3_2_bigger.csv',[t,vsyn,isyn] , delimiter=',')
```
- **Plot**
```
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn_smaller,isyn_smaller = np.loadtxt('data/data_ex_3_2_smaller.csv',delimiter=',')
_,vsyn_normal,isyn_normal = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
_,vsyn_bigger,isyn_bigger = np.loadtxt('data/data_ex_3_2_bigger.csv',delimiter=',')
plt.plot(t,vsyn_smaller,t,vsyn_normal,t,vsyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$ - Smaller $I_w$','$V_{syn}$ - Normal $I_w$','$V_{syn}$ - Larger $I_w$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 3: Measured values of $V_{syn}$ as function of time for varying $I_{w}$')
plt.grid()
plt.show()
plt.plot(t[1:],isyn_smaller[1:],t,isyn_normal,t,isyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$ - Smaller $I_w$','C2F$(I_{syn})$ - Normal $I_w$','C2F$(I_{syn})$ - Larger $I_w$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 4: Measured values of $I_{syn}$ as function of time for varying $I_{w}$')
plt.grid()
plt.show()
```
## 3.3 Different $I_{tau}$
Repeat 3.1 with a smaller and a larger $I_{tau}$, compare the three curves in the same plot.
```
## REMINDER , RESET ALL PARAMETERS AS 3.1
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)]) #change tau
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
p.send_coach_events([pyplane.Coach.generate_pulse_event()])
for i in range(N_samples_per_pulse):
vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
inter=p.read_c2f_output(dT)
isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_3_smaller.csv',[t,vsyn,isyn] , delimiter=',')
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 40)]) #change tau
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
p.send_coach_events([pyplane.Coach.generate_pulse_event()])
for i in range(N_samples_per_pulse):
vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
inter=p.read_c2f_output(dT)
isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_3_bigger.csv',[t,vsyn,isyn] , delimiter=',')
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn_smaller,isyn_smaller = np.loadtxt('data/data_ex_3_3_smaller.csv',delimiter=',')
_,vsyn_normal,isyn_normal = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
_,vsyn_bigger,isyn_bigger = np.loadtxt('data/data_ex_3_3_bigger.csv',delimiter=',')
plt.plot(t,vsyn_smaller,t,vsyn_normal,t,vsyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$ - Smaller $I_{𝜏}$','$V_{syn}$ - Normal $I_{𝜏}$','$V_{syn}$ - Larger $I_{𝜏}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 5: Measured values of $V_{syn}$ as function of time for varying $I_{𝜏}$')
plt.grid()
plt.show()
plt.plot(t,isyn_smaller,t,isyn_normal,t,isyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$ - Smaller $I_{𝜏}$','C2F$(I_{syn})$ - Normal $I_{𝜏}$','C2F$(I_{syn})$ - Larger $I_{𝜏}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 6: Measured values of $I_{syn}$ as function of time for varying $I_{𝜏}$')
plt.grid()
plt.show()
```
## 3.4 Different $I_{thr}$
Repeat 3.1 with a smaller and a larger $I_{thr}$, compare the three curves in the same plot.
```
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)]) #change threshold
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
p.send_coach_events([pyplane.Coach.generate_pulse_event()])
for i in range(N_samples_per_pulse):
vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
inter=p.read_c2f_output(dT)
isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_4_smaller.csv',[t,vsyn,isyn] , delimiter=',')
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 80)]) #change threshold
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 10)])
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
p.send_coach_events([pyplane.Coach.generate_pulse_event()])
for i in range(N_samples_per_pulse):
vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
inter=p.read_c2f_output(dT)
isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_4_bigger.csv',[t,vsyn,isyn] , delimiter=',')
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn_smaller,isyn_smaller = np.loadtxt('data/data_ex_3_4_smaller.csv',delimiter=',')
_,vsyn_normal,isyn_normal = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
_,vsyn_bigger,isyn_bigger = np.loadtxt('data/data_ex_3_4_bigger.csv',delimiter=',')
plt.plot(t,vsyn_smaller,t,vsyn_normal,t,vsyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$ - Smaller $I_{thr}$','$V_{syn}$ - Normal $I_{thr}$','$V_{syn}$ - Larger $I_{thr}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 7: Measured values of $V_{syn}$ as function of time for varying $I_{thr}$')
plt.grid()
plt.show()
plt.plot(t[1:],isyn_smaller[1:],t,isyn_normal,t,isyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$ - Smaller $I_{thr}$','C2F$(I_{syn})$ - Normal $I_{thr}$','C2F$(I_{syn})$ - Larger $I_{thr}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 8: Measured values of $I_{syn}$ as function of time for varying $I_{thr}$')
plt.grid()
plt.show()
```
## 3.5 Different pulse width
Repeat 3.1 with a smaller and a larger pulse width, compare the three curves in the same plot.
```
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 6)]) # Change pulse width
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
p.send_coach_events([pyplane.Coach.generate_pulse_event()])
for i in range(N_samples_per_pulse):
vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
inter=p.read_c2f_output(dT)
isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_5_smaller.csv',[t,vsyn,isyn] , delimiter=',')
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTAU_P, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 25)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VTHR_N, \
pyplane.Coach.BiasType.P, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 30)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.DPI_VWEIGHT_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I30nA, 100)])
p.send_coach_events([pyplane.Coach.generate_biasgen_event(\
pyplane.Coach.BiasAddress.PEX_VTAU_N, \
pyplane.Coach.BiasType.N, \
pyplane.Coach.BiasGenMasterCurrent.I60pA, 14)]) # Change pulse width
N_pulses = 2 # for each trial, send 2 input pulses
N_samples_per_pulse = 10 # for each input pulse, sample 10 points
N_samples = N_pulses*N_samples_per_pulse
dT = 0.02 # delta t between the samples, DO NOT CHANGE
t = np.arange(N_samples)*dT
vsyn = np.zeros(N_samples)
isyn = np.zeros(N_samples)
for k in range(N_pulses):
p.send_coach_events([pyplane.Coach.generate_pulse_event()])
for i in range(N_samples_per_pulse):
vsyn[k*N_samples_per_pulse+i] += p.read_voltage(pyplane.AdcChannel.AOUT14)
inter=p.read_c2f_output(dT)
isyn[k*N_samples_per_pulse+i] += inter[9]
np.savetxt('data/data_ex_3_5_bigger.csv',[t,vsyn,isyn] , delimiter=',')
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 15})
t,vsyn_smaller,isyn_smaller = np.loadtxt('data/data_ex_3_5_smaller.csv',delimiter=',')
_,vsyn_normal,isyn_normal = np.loadtxt('data/data_ex_3_1.csv',delimiter=',')
_,vsyn_bigger,isyn_bigger = np.loadtxt('data/data_ex_3_5_bigger.csv',delimiter=',')
plt.plot(t,vsyn_smaller,t,vsyn_normal,t,vsyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('$V_{syn}$ [V]')
plt.legend(['$V_{syn}$ - Smaller $I_{\\rm{pulse\ width}}$','$V_{syn}$ - Normal $I_{\\rm{pulse\ width}}$','$V_{syn}$ - Larger $I_{\\rm{pulse\ width}}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 9: Measured values of $V_{syn}$ as function of time for varying $I_{\\rm{pulse\ width}}$')
plt.grid()
plt.show()
plt.plot(t[1:],isyn_smaller[1:],t,isyn_normal,t,isyn_bigger)
plt.xlabel('t [s]')
plt.ylabel('C2F [Hz]')
plt.legend(['C2F$(I_{syn})$ - Smaller $I_{\\rm{pulse\ width}}$','C2F$(I_{syn})$ - Normal $I_{\\rm{pulse\ width}}$','C2F$(I_{syn})$ - Larger $I_{\\rm{pulse\ width}}$'],bbox_to_anchor=(1.05, 1),loc='upper left')
plt.title('Fig. 10: Measured values of $I_{syn}$ as function of time for varying $I_{\\rm{pulse\ width}}$')
plt.grid()
plt.show()
```
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_01_ai_gym.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 12: Reinforcement Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 12 Video Material
* **Part 12.1: Introduction to the OpenAI Gym** [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)
* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)
* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)
* Part 12.4: Atari Games with Keras Neural Networks [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)
* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb)
# Part 12.1: Introduction to the OpenAI Gym
[OpenAI Gym](https://gym.openai.com/) aims to provide an easy-to-setup general-intelligence benchmark with a wide variety of different environments. The goal is to standardize how environments are defined in AI research publications so that published research becomes more easily reproducible. The project claims to provide the user with a simple interface. As of June 2017, developers can only use Gym with Python.
OpenAI gym is pip-installed onto your local machine. There are a few significant limitations to be aware of:
* OpenAI Gym Atari only **directly** supports Linux and Macintosh
* OpenAI Gym Atari can be used with Windows; however, it requires a particular [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30)
* OpenAI Gym can not directly render animated games in Google CoLab.
Because OpenAI Gym requires a graphics display, the only way to display Gym in Google CoLab is an embedded video. The presentation of OpenAI Gym game animations in Google CoLab is discussed later in this module.
### OpenAI Gym Leaderboard
The OpenAI Gym does have a leaderboard, similar to Kaggle; however, the OpenAI Gym's leaderboard is much more informal than Kaggle's. The user's local machine performs all scoring. As a result, the OpenAI Gym's leaderboard is strictly an "honor system." The leaderboard is maintained in the following GitHub repository:
* [OpenAI Gym Leaderboard](https://github.com/openai/gym/wiki/Leaderboard)
If you submit a score, you are required to provide a writeup with sufficient instructions to reproduce your result. A video of your results is suggested, but not required.
### Looking at Gym Environments
The centerpiece of Gym is the environment, which defines the "game" in which your reinforcement learning algorithm will compete. An environment does not need to be a game; however, it describes the following game-like features:
* **action space**: What actions can we take on the environment, at each step/episode, to alter the environment.
* **observation space**: What is the current state of the portion of the environment that we can observe. Usually, we can see the entire environment.
Before we begin to look at Gym, it is essential to understand some of the terminology used by this library.
* **Agent** - The machine learning program or model that controls the actions.
* **Step** - One round of issuing actions that affect the observation space.
* **Episode** - A collection of steps that terminates when the agent fails to meet the environment's objective, or the episode reaches the maximum number of allowed steps.
* **Render** - Gym can render one frame for display after each episode.
* **Reward** - A positive reinforcement that can occur at the end of each episode, after the agent acts.
* **Nondeterministic** - For some environments, randomness is a factor in deciding what effects actions have on reward and changes to the observation space.
It is important to note that many of the Gym environments specify that they are not nondeterministic even though they make use of random numbers to process actions. It is generally agreed upon (based on the gym GitHub issue tracker) that the nondeterministic property means that an environment will still behave randomly even when given a consistent seed value. A program can use an environment's seed method to seed the random number generator for that environment.
The Gym library allows us to query some of these attributes from environments. I created the following function to query gym environments.
```
import gym
def query_environment(name):
env = gym.make(name)
spec = gym.spec(name)
print(f"Action Space: {env.action_space}")
print(f"Observation Space: {env.observation_space}")
print(f"Max Episode Steps: {spec.max_episode_steps}")
print(f"Nondeterministic: {spec.nondeterministic}")
print(f"Reward Range: {env.reward_range}")
print(f"Reward Threshold: {spec.reward_threshold}")
```
We will begin by looking at the MountainCar-v0 environment, which challenges an underpowered car to escape the valley between two mountains. The following code describes the Mountain Car environment.
```
query_environment("MountainCar-v0")
```
There are three distinct actions that can be taken: accelerate forward, do not accelerate, or accelerate backward. The observation space contains two continuous (floating point) values, as evidenced by the Box object. The observation space is simply the position and velocity of the car. The car has 200 steps to escape for each episode. You would have to look at the code to know, but the mountain car receives no incremental reward. The only reward for the car is given when it escapes the valley.
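As a quick illustration of these spaces, the following minimal sketch (using the same classic Gym API as the random-agent example later in this notebook) resets the environment and takes a single random action; the returned observation is the position/velocity pair described above.
```
env = gym.make("MountainCar-v0")
observation = env.reset()               # [position, velocity]
action = env.action_space.sample()      # one of the three discrete actions
observation, reward, done, info = env.step(action)
print(observation, reward, done)
env.close()
```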
```
query_environment("CartPole-v1")
```
The CartPole-v1 environment challenges the agent to move a cart while keeping a pole balanced. The environment has an observation space of 4 continuous numbers:
* Cart Position
* Cart Velocity
* Pole Angle
* Pole Velocity At Tip
To achieve this goal, the agent can take the following actions:
* Push cart to the left
* Push cart to the right
There is also a continuous variant of the mountain car. This version does not simply have the motor on or off. For the continuous car the action space is a single floating point number that specifies how much forward or backward force is being applied.
```
query_environment("MountainCarContinuous-v0")
```
Note: ignore the warning above, it is a relatively inconsequential bug in OpenAI Gym.
Atari games, like Breakout, can use an observation space that is either equal to the size of the Atari screen (210x160) or even the RAM memory of the Atari (128 bytes) to determine the state of the game. Yes, that's bytes, not kilobytes!
```
query_environment("Breakout-v0")
query_environment("Breakout-ram-v0")
```
### Render OpenAI Gym Environments from CoLab
It is possible to visualize the game your agent is playing, even on CoLab. This section provides information on how to generate a video in CoLab that shows you an episode of the game your agent is playing. This video process is based on suggestions found [here](https://colab.research.google.com/drive/1flu31ulJlgiRL1dnN2ir8wGh9p7Zij2t).
Begin by installing **pyvirtualdisplay** and **python-opengl**.
```
!pip install gym pyvirtualdisplay > /dev/null 2>&1
!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
```
Next, we install needed requirements to display an Atari game.
```
!apt-get update > /dev/null 2>&1
!apt-get install cmake > /dev/null 2>&1
!pip install --upgrade setuptools 2>&1
!pip install ez_setup > /dev/null 2>&1
!pip install gym[atari] > /dev/null 2>&1
```
Next we define functions used to show the video by adding it to the CoLab notebook.
```
import gym
from gym.wrappers import Monitor
import glob
import io
import base64
from IPython.display import HTML
from pyvirtualdisplay import Display
from IPython import display as ipythondisplay
display = Display(visible=0, size=(1400, 900))
display.start()
"""
Utility functions to enable video recording of gym environment
and displaying it.
To enable video, just do "env = wrap_env(env)"
"""
def show_video():
mp4list = glob.glob('video/*.mp4')
if len(mp4list) > 0:
mp4 = mp4list[0]
video = io.open(mp4, 'r+b').read()
encoded = base64.b64encode(video)
ipythondisplay.display(HTML(data='''<video alt="test" autoplay
loop controls style="height: 400px;">
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))))
else:
print("Could not find video")
def wrap_env(env):
env = Monitor(env, './video', force=True)
return env
```
Now we are ready to play the game. We use a simple random agent.
```
#env = wrap_env(gym.make("MountainCar-v0"))
env = wrap_env(gym.make("Atlantis-v0"))
observation = env.reset()
while True:
env.render()
#your agent goes here
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
break;
env.close()
show_video()
```
# Using Python and NumPy more efficiently
As with any programming language, there are more efficient and less efficient ways to write code that has the same functional behavior. In Python, it can be particularly jarring that `for` loops have a relatively high per-loop cost. For simple `for` loops, there can be alternative approaches using regular Python that are both better performing and easier to read. For numerical calculations, `NumPy` provides additional capabilities that can dramatically improve performance.
```
# Math libraries
import math
import numpy as np
# Create a convenience function for using the Python `timeit` module
import timeit
def ms_from_timeit(function_as_string, argument_as_string, runs=100, repeat=10):
"""Returns the milliseconds per function call"""
timer = timeit.Timer(function_as_string+'('+argument_as_string+')',
setup='from __main__ import '+function_as_string+', '+argument_as_string)
return min(timer.repeat(repeat, runs)) / runs * 1000
```
## Calling a function on 10,000 values
Let's start with a simple task: calculate the square root on 10,000 randomly generated values.
```
# Create a list of 10000 random floats in [0, 1)
import random
random_list = [random.random() for i in range(10000)]
```
### Using a `for` loop
A simple implementation is to use a `for` loop to step through the input list and append each square-root value to an output list.
```
def sqrt_python_loop(python_list):
result = []
for value in python_list:
result.append(math.sqrt(value))
return result
print("Using a Python loop takes {0:5.3f} ms".format(ms_from_timeit('sqrt_python_loop', 'random_list')))
```
### Using list comprehension
For `for` loops that only need to operate on an element-by-element basis, we can use Python's list comprehension for a significant performance boost.
```
def sqrt_python_list_comprehension(python_list):
result = [math.sqrt(value) for value in python_list]
return result
print("Using Python list comprehension takes {0:5.3f} ms".format(ms_from_timeit('sqrt_python_list_comprehension', 'random_list')))
```
### Using `map`
One can also use the built-in function `map` to obtain faster performance, although it may be less readable than using list comprehension.
```
def sqrt_python_map(python_list):
    result = list(map(math.sqrt, python_list))  # materialize the iterator so the work is actually timed (map is lazy in Python 3)
return result
print("Using Python map takes {0:5.3f} ms".format(ms_from_timeit('sqrt_python_map', 'random_list')))
```
## Calling a numerical function on 10,000 numbers
The above examples have significant overhead due to the adherence to "vanilla" Python. For numerical calculations, use NumPy.
```
# Create a NumPy ndarray equivalent for the same list of random floats
random_ndarray = np.array(random_list)
```
### Using NumPy incorrectly
While NumPy is quite powerful, it's entirely possible to use it sub-optimally. In the following example, which sticks with using `map`, the per-element calls to `np.sqrt` and the conversion back into an ndarray completely dominate the run time.
```
def sqrt_numpy_map(numpy_array):
    result = np.array(list(map(np.sqrt, numpy_array)))  # per-element np.sqrt calls, then conversion back to an ndarray
return result
print("Using NumPy with map takes {0:5.3f} ms".format(ms_from_timeit('sqrt_numpy_map', 'random_ndarray')))
```
### Using NumPy correctly
Most of NumPy's functions are already designed to act element-wise on NumPy arrays, so there's actually no need to use `map`.
```
def sqrt_numpy_ufunc(numpy_array):
result = np.sqrt(numpy_array)
return result
print("Using NumPy universal function takes {0:5.3f} ms".format(ms_from_timeit('sqrt_numpy_ufunc', 'random_ndarray')))
```
## Using NumPy on two-dimensional arrays
Reductions along an axis are another place where an explicit Python loop over an array can be replaced by a single NumPy call; here we compute the per-row standard deviation of a 100x100 array both ways.
```
# Create a 2D NumPy ndarray from the same list of random floats
random_ndarray_2d = np.array(random_list).reshape(100, 100)
def std_1d(numpy_2d_array):
    # Explicit Python loop: compute the standard deviation of each row
    result = np.zeros(numpy_2d_array.shape[0])
    for index in np.arange(numpy_2d_array.shape[0]):
        result[index] = np.std(numpy_2d_array[index, :])
    return result
print("Using NumPy avoiding `axis` takes {0:5.3f} ms".format(ms_from_timeit('std_1d', 'random_ndarray_2d')))
def std_1d_axis(numpy_2d_array):
    # Same per-row standard deviation with a single vectorized call along axis=1
    result = np.std(numpy_2d_array, axis=1)
    return result
print("Using NumPy using `axis` takes {0:5.3f} ms".format(ms_from_timeit('std_1d_axis', 'random_ndarray_2d')))
```
```
import sys
from pathlib import Path
curr_path = str(Path().absolute())
parent_path = str(Path().absolute().parent)
sys.path.append(parent_path)  # add the parent directory to the system path
import gym
import torch
import math
import datetime
import numpy as np
from collections import defaultdict
from envs.gridworld_env import CliffWalkingWapper
from QLearning.agent import QLearning
from common.utils import plot_rewards
from common.utils import save_results,make_dir
```
## The Q-Learning algorithm
```
class QLearning(object):
def __init__(self,state_dim,
action_dim,cfg):
self.action_dim = action_dim
        self.lr = cfg.lr  # learning rate
self.gamma = cfg.gamma
self.epsilon = 0
self.sample_count = 0
self.epsilon_start = cfg.epsilon_start
self.epsilon_end = cfg.epsilon_end
self.epsilon_decay = cfg.epsilon_decay
        self.Q_table = defaultdict(lambda: np.zeros(action_dim))  # dict mapping state -> array of action values (Q-values), i.e. the Q-table
def choose_action(self, state):
self.sample_count += 1
self.epsilon = self.epsilon_end + (self.epsilon_start - self.epsilon_end) * \
            math.exp(-1. * self.sample_count / self.epsilon_decay)  # epsilon decays over time; here an exponential decay is used
        # epsilon-greedy policy
if np.random.uniform(0, 1) > self.epsilon:
            action = np.argmax(self.Q_table[str(state)])  # choose the action with the largest Q(s,a)
else:
            action = np.random.choice(self.action_dim)  # choose a random action
return action
def predict(self,state):
action = np.argmax(self.Q_table[str(state)])
return action
def update(self, state, action, reward, next_state, done):
Q_predict = self.Q_table[str(state)][action]
        if done:  # terminal state
Q_target = reward
else:
Q_target = reward + self.gamma * np.max(self.Q_table[str(next_state)])
self.Q_table[str(state)][action] += self.lr * (Q_target - Q_predict)
def save(self,path):
import dill
torch.save(
obj=self.Q_table,
f=path+"Qleaning_model.pkl",
pickle_module=dill
)
print("保存模型成功!")
def load(self, path):
import dill
self.Q_table =torch.load(f=path+'Qleaning_model.pkl',pickle_module=dill)
print("加载模型成功!")
```
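To make the tabular update rule in `update()` concrete, here is a minimal standalone sketch with made-up numbers (the state names, action index and reward below are purely illustrative and not tied to CliffWalking-v0):
```
import numpy as np
from collections import defaultdict

lr, gamma = 0.1, 0.9
Q = defaultdict(lambda: np.zeros(4))
state, action, reward, next_state = "s0", 2, -1.0, "s1"
Q[next_state][:] = [0.0, 0.5, 0.2, 0.1]          # pretend these Q-values were already learned

target = reward + gamma * np.max(Q[next_state])  # -1 + 0.9 * 0.5 = -0.55
Q[state][action] += lr * (target - Q[state][action])
print(Q[state][action])                          # 0 + 0.1 * (-0.55 - 0) = -0.055
```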
## Training
```
def train(cfg,env,agent):
    print('Start training!')
    print(f'Environment: {cfg.env_name}, Algorithm: {cfg.algo_name}, Device: {cfg.device}')
    rewards = []  # record the reward of each episode
    ma_rewards = []  # record the moving-average reward
    for i_ep in range(cfg.train_eps):
        ep_reward = 0  # cumulative reward of this episode
        state = env.reset()  # reset the environment to start a new episode
        while True:
            action = agent.choose_action(state)  # choose an action according to the algorithm
            next_state, reward, done, _ = env.step(action)  # take one step in the environment
            agent.update(state, action, reward, next_state, done)  # Q-learning update
            state = next_state  # move on to the next state
ep_reward += reward
if done:
break
rewards.append(ep_reward)
if ma_rewards:
ma_rewards.append(ma_rewards[-1]*0.9+ep_reward*0.1)
else:
ma_rewards.append(ep_reward)
if (i_ep+1)%20 == 0:
            print('Episode: {}/{}, Reward: {}'.format(i_ep+1, cfg.train_eps, ep_reward))
    print('Training finished!')
return rewards,ma_rewards
```
## Testing
```
def test(cfg,env,agent):
# env = gym.make("FrozenLake-v0", is_slippery=False) # 0 left, 1 down, 2 right, 3 up
# env = FrozenLakeWapper(env)
    print('Start testing!')
    print(f'Environment: {cfg.env_name}, Algorithm: {cfg.algo_name}, Device: {cfg.device}')
    # testing does not use the epsilon-greedy policy, so set the corresponding values to 0
    cfg.epsilon_start = 0.0  # initial epsilon of the epsilon-greedy policy
    cfg.epsilon_end = 0.0  # final epsilon of the epsilon-greedy policy
    rewards = []  # record the reward of each episode
    ma_rewards = []  # record the moving-average reward
    for i_ep in range(cfg.test_eps):
        ep_reward = 0  # cumulative reward of this episode
        state = env.reset()  # reset the environment to start a new episode
        while True:
            action = agent.predict(state)  # choose an action greedily from the Q-table
            next_state, reward, done, _ = env.step(action)  # take one step in the environment
            state = next_state  # move on to the next state
ep_reward += reward
if done:
break
rewards.append(ep_reward)
if ma_rewards:
ma_rewards.append(ma_rewards[-1]*0.9+ep_reward*0.1)
else:
ma_rewards.append(ep_reward)
print(f"回合:{i_ep+1}/{cfg.test_eps},奖励:{ep_reward:.1f}")
print('完成测试!')
return rewards,ma_rewards
```
## Parameter settings
```
curr_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")  # get the current time (used in output paths)
algo_name = 'Q-learning'  # algorithm name
env_name = 'CliffWalking-v0'  # environment name
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # check for GPU
class QlearningConfig:
    '''Training-related parameters'''
    def __init__(self):
        self.algo_name = algo_name  # algorithm name
        self.env_name = env_name  # environment name
        self.device = device  # check for GPU
        self.train_eps = 400  # number of training episodes
        self.test_eps = 20  # number of testing episodes
        self.gamma = 0.9  # discount factor for rewards
        self.epsilon_start = 0.95  # initial epsilon of the epsilon-greedy policy
        self.epsilon_end = 0.01  # final epsilon of the epsilon-greedy policy
        self.epsilon_decay = 300  # decay rate of epsilon in the epsilon-greedy policy
        self.lr = 0.1  # learning rate
class PlotConfig:
    '''Plotting-related parameters
    '''
    def __init__(self) -> None:
        self.algo_name = algo_name  # algorithm name
        self.env_name = env_name  # environment name
        self.device = device  # check for GPU
        self.result_path = curr_path + "/outputs/" + self.env_name + \
            '/' + curr_time + '/results/'  # path for saving results
        self.model_path = curr_path + "/outputs/" + self.env_name + \
            '/' + curr_time + '/models/'  # path for saving models
        self.save = True  # whether to save figures
```
## Create the environment and the agent
```
def env_agent_config(cfg,seed=1):
    '''Create the environment and the agent
    Args:
        cfg: the configuration object
        seed (int, optional): random seed. Defaults to 1.
    Returns:
        env: the environment
        agent: the agent
    '''
    env = gym.make(cfg.env_name)
    env = CliffWalkingWapper(env)
    env.seed(seed)  # set the random seed
    state_dim = env.observation_space.n  # state dimension
    action_dim = env.action_space.n  # action dimension
agent = QLearning(state_dim,action_dim,cfg)
return env,agent
```
## Run training and output the results
```
cfg = QlearningConfig()
plot_cfg = PlotConfig()
# training
env, agent = env_agent_config(cfg, seed=1)
rewards, ma_rewards = train(cfg, env, agent)
make_dir(plot_cfg.result_path, plot_cfg.model_path)  # create folders for saving results and models
agent.save(path=plot_cfg.model_path)  # save the model
save_results(rewards, ma_rewards, tag='train',
             path=plot_cfg.result_path)  # save the results
plot_rewards(rewards, ma_rewards, plot_cfg, tag="train")  # plot the results
# testing
env, agent = env_agent_config(cfg, seed=10)
agent.load(path=plot_cfg.model_path)  # load the model
rewards, ma_rewards = test(cfg, env, agent)
save_results(rewards, ma_rewards, tag='test', path=plot_cfg.result_path)  # save the results
plot_rewards(rewards, ma_rewards, plot_cfg, tag="test")  # plot the results
```
## Multi-Fidelity BO with Discrete Fidelities using KG
In this tutorial, we show how to do multi-fidelity BO with discrete fidelities based on [1], where each fidelity is a different "information source." This tutorial uses the same setup as the [continuous multi-fidelity BO tutorial](https://botorch.org/tutorials/multi_fidelity_bo), except with discrete fidelity parameters that are interpreted as multiple information sources.
We use a GP model with a single task that models the design and fidelity parameters jointly. In some cases, where there is not a natural ordering in the fidelity space, it may be more appropriate to use a multi-task model (with, say, an ICM kernel). We will provide a tutorial once this functionality is in place.
[1] [M. Poloczek, J. Wang, P.I. Frazier. Multi-Information Source Optimization. NeurIPS, 2017](https://papers.nips.cc/paper/2017/file/df1f1d20ee86704251795841e6a9405a-Paper.pdf)
[2] [J. Wu, S. Toscano-Palmerin, P.I. Frazier, A.G. Wilson. Practical Multi-fidelity Bayesian Optimization for Hyperparameter Tuning. Conference on Uncertainty in Artificial Intelligence (UAI), 2019](https://arxiv.org/pdf/1903.04703.pdf)
### Set dtype and device
```
import os
import torch
tkwargs = {
"dtype": torch.double,
"device": torch.device("cuda" if torch.cuda.is_available() else "cpu"),
}
SMOKE_TEST = os.environ.get("SMOKE_TEST")
```
### Problem setup
We'll consider the Augmented Hartmann multi-fidelity synthetic test problem. This function is a version of the Hartmann6 test function with an additional dimension representing the fidelity parameter; details are in [2]. The function takes the form $f(x,s)$ where $x \in [0,1]^6$ and $s \in \{0.5, 0.75, 1\}$. The target fidelity is 1.0, which means that our goal is to solve $\max_x f(x,1.0)$ by making use of cheaper evaluations $f(x,s)$ for $s \in \{0.5, 0.75\}$. In this example, we'll assume that the cost function takes the form $5.0 + s$, illustrating a situation where the fixed cost is $5.0$.
```
from botorch.test_functions.multi_fidelity import AugmentedHartmann
problem = AugmentedHartmann(negate=True).to(**tkwargs)
fidelities = torch.tensor([0.5, 0.75, 1.0], **tkwargs)
```
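As a quick illustration of the assumed affine cost $5.0 + s$ (the same structure is encoded below via `AffineFidelityCostModel`):
```
# evaluation cost at each available fidelity under the assumed cost model 5.0 + s
for s in fidelities.tolist():
    print(f"fidelity s = {s:.2f}  ->  cost = {5.0 + s:.2f}")
```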
#### Model initialization
We use a `SingleTaskMultiFidelityGP` as the surrogate model, which uses a kernel from [2] that is well-suited for multi-fidelity applications. The `SingleTaskMultiFidelityGP` models the design and fidelity parameters jointly, so its domain is $[0,1]^7$.
```
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP
from botorch.models.transforms.outcome import Standardize
from gpytorch.mlls.exact_marginal_log_likelihood import ExactMarginalLogLikelihood
from botorch.utils.transforms import unnormalize, standardize
from botorch.utils.sampling import draw_sobol_samples
def generate_initial_data(n=16):
# generate training data
train_x = torch.rand(n, 6, **tkwargs)
train_f = fidelities[torch.randint(3, (n,1))]
train_x_full = torch.cat((train_x, train_f), dim=1)
train_obj = problem(train_x_full).unsqueeze(-1) # add output dimension
return train_x_full, train_obj
def initialize_model(train_x, train_obj):
# define a surrogate model suited for a "training data"-like fidelity parameter
# in dimension 6, as in [2]
model = SingleTaskMultiFidelityGP(
train_x,
train_obj,
outcome_transform=Standardize(m=1),
data_fidelity=6
)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
return mll, model
```
#### Define a helper function to construct the MFKG acquisition function
The helper function illustrates how one can initialize a $q$MFKG acquisition function. In this example, we assume that the affine cost is known. We then use the notion of a `CostAwareUtility` in BoTorch to scalarize the "competing objectives" of information gain and cost. The MFKG acquisition function optimizes the ratio of information gain to cost, which is captured by the `InverseCostWeightedUtility`.
In order for MFKG to evaluate the information gain, it uses the model to predict the function value at the highest fidelity after conditioning on the observation. This is handled by the `project` argument, which specifies how to transform a tensor `X` to its target fidelity. We use a default helper function called `project_to_target_fidelity` to achieve this.
An important point to keep in mind: in the case of standard KG, one can ignore the current value and simply optimize the expected maximum posterior mean of the next stage. However, for MFKG, since the goal is to optimize information *gain* per cost, it is important to first compute the current value (i.e., maximum of the posterior mean at the target fidelity). To accomplish this, we use a `FixedFeatureAcquisitionFunction` on top of a `PosteriorMean`.
```
from botorch import fit_gpytorch_model
from botorch.models.cost import AffineFidelityCostModel
from botorch.acquisition.cost_aware import InverseCostWeightedUtility
from botorch.acquisition import PosteriorMean
from botorch.acquisition.knowledge_gradient import qMultiFidelityKnowledgeGradient
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction
from botorch.optim.optimize import optimize_acqf
from botorch.acquisition.utils import project_to_target_fidelity
bounds = torch.tensor([[0.0] * problem.dim, [1.0] * problem.dim], **tkwargs)
target_fidelities = {6: 1.0}
cost_model = AffineFidelityCostModel(fidelity_weights={6: 1.0}, fixed_cost=5.0)
cost_aware_utility = InverseCostWeightedUtility(cost_model=cost_model)
def project(X):
return project_to_target_fidelity(X=X, target_fidelities=target_fidelities)
def get_mfkg(model):
curr_val_acqf = FixedFeatureAcquisitionFunction(
acq_function=PosteriorMean(model),
d=7,
columns=[6],
values=[1],
)
_, current_value = optimize_acqf(
acq_function=curr_val_acqf,
bounds=bounds[:,:-1],
q=1,
num_restarts=10 if not SMOKE_TEST else 2,
raw_samples=1024 if not SMOKE_TEST else 4,
options={"batch_limit": 10, "maxiter": 200},
)
return qMultiFidelityKnowledgeGradient(
model=model,
num_fantasies=128 if not SMOKE_TEST else 2,
current_value=current_value,
cost_aware_utility=cost_aware_utility,
project=project,
)
```
#### Define a helper function that performs the essential BO step
This helper function optimizes the acquisition function and returns the batch $\{x_1, x_2, \ldots x_q\}$ along with the observed function values. The function `optimize_acqf_mixed` sequentially optimizes the acquisition function over $x$ for each value of the fidelity $s \in \{0.5, 0.75, 1.0\}$.
```
from botorch.optim.initializers import gen_one_shot_kg_initial_conditions
from botorch.optim.optimize import optimize_acqf_mixed
torch.set_printoptions(precision=3, sci_mode=False)
NUM_RESTARTS = 10 if not SMOKE_TEST else 2
RAW_SAMPLES = 512 if not SMOKE_TEST else 4
def optimize_mfkg_and_get_observation(mfkg_acqf):
"""Optimizes MFKG and returns a new candidate, observation, and cost."""
X_init = gen_one_shot_kg_initial_conditions(
        acq_function=mfkg_acqf,
        bounds=bounds,
        q=4,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,
)
candidates, _ = optimize_acqf_mixed(
acq_function=mfkg_acqf,
bounds=bounds,
fixed_features_list=[{6: 0.5}, {6: 0.75}, {6: 1.0}],
q=4,
num_restarts=NUM_RESTARTS,
raw_samples=RAW_SAMPLES,
batch_initial_conditions=X_init,
options={"batch_limit": 5, "maxiter": 200},
)
# observe new values
cost = cost_model(candidates).sum()
new_x = candidates.detach()
new_obj = problem(new_x).unsqueeze(-1)
print(f"candidates:\n{new_x}\n")
print(f"observations:\n{new_obj}\n\n")
return new_x, new_obj, cost
```
### Perform a few steps of multi-fidelity BO
First, let's generate some initial random data and fit a surrogate model.
```
train_x, train_obj = generate_initial_data(n=16)
```
We can now use the helper functions above to run a few iterations of BO.
```
cumulative_cost = 0.0
N_ITER = 3 if not SMOKE_TEST else 1
for _ in range(N_ITER):
mll, model = initialize_model(train_x, train_obj)
fit_gpytorch_model(mll)
mfkg_acqf = get_mfkg(model)
new_x, new_obj, cost = optimize_mfkg_and_get_observation(mfkg_acqf)
train_x = torch.cat([train_x, new_x])
train_obj = torch.cat([train_obj, new_obj])
cumulative_cost += cost
```
### Make a final recommendation
In multi-fidelity BO, there are usually fewer observations of the function at the target fidelity, so it is important to use a recommendation function that uses the correct fidelity. Here, we maximize the posterior mean with the fidelity dimension fixed to the target fidelity of 1.0.
```
def get_recommendation(model):
rec_acqf = FixedFeatureAcquisitionFunction(
acq_function=PosteriorMean(model),
d=7,
columns=[6],
values=[1],
)
final_rec, _ = optimize_acqf(
acq_function=rec_acqf,
bounds=bounds[:,:-1],
q=1,
num_restarts=10,
raw_samples=512,
options={"batch_limit": 5, "maxiter": 200},
)
final_rec = rec_acqf._construct_X_full(final_rec)
objective_value = problem(final_rec)
print(f"recommended point:\n{final_rec}\n\nobjective value:\n{objective_value}")
return final_rec
final_rec = get_recommendation(model)
print(f"\ntotal cost: {cumulative_cost}\n")
```
### Comparison to standard EI (always use target fidelity)
Let's now repeat the same steps using a standard EI acquisition function (note that this is not a rigorous comparison as we are only looking at one trial in order to keep computational requirements low).
```
from botorch.acquisition import qExpectedImprovement
def get_ei(model, best_f):
return FixedFeatureAcquisitionFunction(
acq_function=qExpectedImprovement(model=model, best_f=best_f),
d=7,
columns=[6],
values=[1],
)
def optimize_ei_and_get_observation(ei_acqf):
"""Optimizes EI and returns a new candidate, observation, and cost."""
candidates, _ = optimize_acqf(
acq_function=ei_acqf,
bounds=bounds[:,:-1],
q=4,
num_restarts=10,
raw_samples=512,
options={"batch_limit": 5, "maxiter": 200},
)
# add the fidelity parameter
candidates = ei_acqf._construct_X_full(candidates)
# observe new values
cost = cost_model(candidates).sum()
new_x = candidates.detach()
new_obj = problem(new_x).unsqueeze(-1)
print(f"candidates:\n{new_x}\n")
print(f"observations:\n{new_obj}\n\n")
return new_x, new_obj, cost
cumulative_cost = 0.0
train_x, train_obj = generate_initial_data(n=16)
for _ in range(N_ITER):
mll, model = initialize_model(train_x, train_obj)
fit_gpytorch_model(mll)
ei_acqf = get_ei(model, best_f=train_obj.max())
new_x, new_obj, cost = optimize_ei_and_get_observation(ei_acqf)
train_x = torch.cat([train_x, new_x])
train_obj = torch.cat([train_obj, new_obj])
cumulative_cost += cost
final_rec = get_recommendation(model)
print(f"\ntotal cost: {cumulative_cost}\n")
```
# Clustered Multitask GP (w/ Pyro/GPyTorch High-Level Interface)
## Introduction
In this example, we use the Pyro integration for a GP model with additional latent variables.
We are modelling a multitask GP in this example. Rather than assuming a linear correlation among the different tasks, we assume that there is cluster structure for the different tasks. Let's assume there are $k$ different clusters of tasks. The generative model for task $i$ is:
$$
p(\mathbf y_i \mid \mathbf x_i) = \int \sum_{z_i=1}^k p(\mathbf y_i \mid \mathbf f (\mathbf x_i), z_i) \: p(z_i) \: p(\mathbf f (\mathbf x_i) ) \: d \mathbf f
$$
where $z_i$ is the cluster assignment for task $i$. There are therefore $k$ latent functions $\mathbf f = [f_1 \ldots f_k]$, each modelled by a GP, representing each cluster.
Our goal is therefore to infer:
- The latent functions $f_1 \ldots f_k$
- The cluster assignments $z_i$ for each task
```
import math
import time  # used below to create a unique name_prefix for the PyroGP model
import torch
import pyro
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
```
## Adding additional latent variables to the likelihood
The standard GPyTorch variational objects will take care of inferring the latent functions $f_1 \ldots f_k$. However, we do need to add the additional latent variables $z_i$ to the models. We will do so by creating a custom likelihood that models:
$$
\sum_{z_i=1}^k p(\mathbf y_i \mid \mathbf f (\mathbf x_i), z_i) \: p(z_i)
$$
GPyTorch's likelihoods are capable of modeling additional latent variables. Our custom likelihood needs to define the following three functions:
- `pyro_model` (needs to call through to `super().pyro_model` at the end), which defines the prior distribution for additional latent variables
- `pyro_guide` (needs to call through to `super().pyro_guide` at the end), which defines the variational (guide) distribution for additional latent variables
- `forward`, which defines the observation distributions conditioned on $\mathbf f (\mathbf x_i)$ and any additional latent variables.
### The pyro_model function
For each task, we will model the cluster assignment with a `OneHotCategorical` variable, where each cluster has equal probability. The `pyro_model` function will make a `pyro.sample` call to this prior distribution and then call the super method:
```python
# self.prior_cluster_logits = torch.zeros(num_tasks, num_clusters)
def pyro_model(self, function_dist, target):
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.prior_cluster_logits).to_event(1)
)
return super().pyro_model(
function_dist,
target,
cluster_assignment_samples=cluster_assignment_samples
)
```
Note that we are adding an additional argument `cluster_assignment_samples` to the `super().pyro_model` call. This will pass the cluster assignment samples to the `forward` call, which is necessary for inference.
### The pyro_guide function
For each task, the variational (guide) distribution will also be a `OneHotCategorical` variable, which will be defined by the parameter `self.variational_cluster_logits`. The `pyro_guide` function will make a `pyro.sample` call to this variational distribution and then call the super method:
```python
def pyro_guide(self, function_dist, target):
pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.variational_cluster_logits).to_event(1)
)
return super().pyro_guide(function_dist, target)
```
Note that, unlike `pyro_model`, the guide does not pass any additional arguments to the `super().pyro_guide` call. It only needs to register the variational distribution for the cluster assignments at the same sample site used in `pyro_model`.
### The forward function
The `pyro_model` function passes the additional keyword argument `cluster_assignment_samples` to the `forward` call. Therefore, our forward method will define the conditional probability $p(\mathbf y_i \mid \mathbf f(\mathbf x), z_i)$, where $\mathbf f(\mathbf x)$ corresponds to the variable `function_samples` and $z_i$ corresponds to the variable `cluster_assignment_samples`.
In our example $p(\mathbf y_i \mid \mathbf f(\mathbf x), z_i)$ corresponds to a Gaussian noise model.
```python
# self.raw_noise is the Gaussian noise parameter
# function_samples is `n x k`
# cluster_assignment_samples is `t x k`, where `t` is the number of tasks
def forward(self, function_samples, cluster_assignment_samples):
return pyro.distributions.Normal(
loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
).to_event(1)
# The to_event call is necessary because we are returning a multitask distribution,
# where each task dimension corresponds to each of the `t` tasks
```
This is all we need for inference! However, if we want to use this model to make predictions, the `cluster_assignment_samples` keyword argument will not be passed into the function. Therefore, we need to make sure that `forward` can handle both inference and predictions:
```python
def forward(self, function_samples, cluster_assignment_samples=None):
if cluster_assignment_samples is None:
# We'll get here at prediction time
# We'll use the variational distribution when making predictions
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", self._cluster_dist(self.variational_cluster_logits)
)
return pyro.distributions.Normal(
loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
).to_event(1)
```
```
class ClusterGaussianLikelihood(gpytorch.likelihoods.Likelihood):
def __init__(self, num_tasks, num_clusters):
super().__init__()
# These are parameters/buffers for the cluster assignment latent variables
self.register_buffer("prior_cluster_logits", torch.zeros(num_tasks, num_clusters))
self.register_parameter("variational_cluster_logits", torch.nn.Parameter(torch.randn(num_tasks, num_clusters)))
# The Gaussian observational noise
self.register_parameter("raw_noise", torch.nn.Parameter(torch.tensor(0.0)))
# Other info
self.num_tasks = num_tasks
self.num_clusters = num_clusters
self.max_plate_nesting = 1
def pyro_guide(self, function_dist, target):
# Here we add the extra variational distribution for the cluster latent variable
pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.variational_cluster_logits).to_event(1)
)
return super().pyro_guide(function_dist, target)
def pyro_model(self, function_dist, target):
# Here we add the extra prior distribution for the cluster latent variable
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.prior_cluster_logits).to_event(1)
)
return super().pyro_model(function_dist, target, cluster_assignment_samples=cluster_assignment_samples)
def forward(self, function_samples, cluster_assignment_samples=None):
# For inference, cluster_assignment_samples will be passed in
# This bit of code is for when we use the likelihood in the predictive mode
if cluster_assignment_samples is None:
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", self._cluster_dist(self.variational_cluster_logits)
)
# Now we return the observational distribution, based on the function_samples and cluster_assignment_samples
res = pyro.distributions.Normal(
loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
).to_event(1)
return res
```
## Constructing the PyroGP model
The PyroGP model is essentially the same as the model we used in the simple example, except for two changes
- We now will use our more complicated `ClusterGaussianLikelihood`
- The latent function should be vector-valued to correspond to the `k` latent functions. As a result, we will learn a batched variational distribution, and use an `IndependentMultitaskVariationalStrategy` to convert the batched variational distribution into a `MultitaskMultivariateNormal` distribution.
```
class ClusterMultitaskGPModel(gpytorch.models.pyro.PyroGP):
def __init__(self, train_x, train_y, num_functions=2, reparam=False):
num_data = train_y.size(-2)
# Define all the variational stuff
inducing_points = torch.linspace(0, 1, 64).unsqueeze(-1)
variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
num_inducing_points=inducing_points.size(-2),
batch_shape=torch.Size([num_functions])
)
# Here we're using a IndependentMultitaskVariationalStrategy - so that the output of the
# GP latent function is a MultitaskMultivariateNormal
variational_strategy = gpytorch.variational.IndependentMultitaskVariationalStrategy(
gpytorch.variational.VariationalStrategy(self, inducing_points, variational_distribution),
num_tasks=num_functions,
)
        # Standard initialization
likelihood = ClusterGaussianLikelihood(train_y.size(-1), num_functions)
super().__init__(variational_strategy, likelihood, num_data=num_data, name_prefix=str(time.time()))
self.likelihood = likelihood
self.num_functions = num_functions
# Mean, covar
self.mean_module = gpytorch.means.ZeroMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
res = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
return res
```
This model can now be used to perform inference on cluster assignments, as well as make predictions using the inferred cluster assignments!
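To see how training might proceed, here is a rough sketch using Pyro's SVI, following the pattern of GPyTorch's other Pyro integration examples. The toy data, learning rate, number of particles, and iteration count below are all assumptions made for illustration rather than values taken from this notebook.
```python
# A minimal training sketch (assumed data and hyperparameters, for illustration only)
import time  # ClusterMultitaskGPModel uses time.time() for its name_prefix

# Toy dataset: 100 inputs in [0, 1] and 4 tasks drawn from two noisy clusters
train_x = torch.linspace(0, 1, 100)
train_y = torch.stack([
    torch.sin(train_x * (2 * math.pi)) + 0.1 * torch.randn(100),
    torch.sin(train_x * (2 * math.pi)) + 0.1 * torch.randn(100),
    torch.cos(train_x * (2 * math.pi)) + 0.1 * torch.randn(100),
    torch.cos(train_x * (2 * math.pi)) + 0.1 * torch.randn(100),
], dim=-1)

pyro.clear_param_store()
model = ClusterMultitaskGPModel(train_x, train_y, num_functions=2)

# Standard Pyro SVI loop over the ELBO
optimizer = pyro.optim.Adam({"lr": 0.01})
elbo = pyro.infer.Trace_ELBO(num_particles=16, vectorize_particles=True, retain_graph=True)
svi = pyro.infer.SVI(model.model, model.guide, optimizer, elbo)

num_iter = 2 if smoke_test else 200
model.train()
for i in range(num_iter):
    loss = svi.step(train_x, train_y)
```
After training, `model.likelihood.variational_cluster_logits` holds the (approximate) variational posterior over each task's cluster assignment.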
# PyQtGraph
## Fast Online Plotting in Python
---------------------------------------------
"PyQtGraph is a pure-python graphics and GUI library built on PyQt4 / PySide and numpy. It is intended for use in mathematics / scientific / engineering applications. Despite being written entirely in python, the library is very fast due to its heavy leverage of numpy for number crunching and Qt's GraphicsView framework for fast display." - http://www.pyqtgraph.org/
## PyQtGraph or Matplotlib?
If you just need to make neat publication-quality plots/figures, then Matplotlib should be your first choice. However, if you are interested in making fast plot updates (> 50 updates per sec), then PyQtGraph is probably the best library to use.
### Prerequisites for this notebook:
* Numpy
* (optional) Basics of PyQt
This notebook covers a few basic features of the library that are sufficient to get you started.
The main topics covered here are:
* Animate data stored in numpy arrays (~ a video).
* How to style your plots.
* How to setup a grid layout.
Refer to the examples provided in the package to learn different features of PyQtGraph. These examples can be accessed via a GUI by running the following in a python shell:
```
import pyqtgraph.examples
pyqtgraph.examples.run()
```
## Animate Numpy Arrays
```
import numpy as np # numpy is used below to generate the arrays we animate
import pyqtgraph as pg # pg is often used as the shorthand notation
from pyqtgraph.Qt import QtCore # import QtCore from the Qt library
```
pyqtgraph.Qt links to the PyQt library. We wish to use the QTimer class of the PyQt library in our example. A timer can be used if you want something to happen “in a while” or “every once in a while”.
```
app = pg.QtGui.QApplication([]) # init QApplication
```
Here, app refers to an instance of the Qt's QApplication class.
QApplication manages the GUI-application's control flow, where all events from the window system and other sources are processed and dispatched. There can only be one QApplication object defined for all your plots created.
```
x = np.random.rand(500,50,50) # create a random numpy array to display - 500 images of size 50x50
pg.setConfigOptions(antialias=True) # enable antialiasing
view = pg.GraphicsView() # create a main graphics window
view.show() # show the window
```
When displaying images at a different resolution, setting antialias to True makes the graphics appear smooth without any artifacts. Antialiasing minimizes aliasing when representing a high-resolution image at a lower resolution. Other useful config options are 'background' and 'foreground' colors.
GraphicsView generates a main graphics window. The default size is (640,480). You can change this to the size of your choice by using the resize function, e.g., view.resize(50,50).
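For example, a light color scheme can be requested up front (these particular colors are just for illustration; config options are typically set before any widgets are created):
```
# Illustrative only: enable antialiasing with a white background and black foreground
pg.setConfigOptions(antialias=True, background='w', foreground='k')
```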
```
p = pg.PlotItem() # add a plotItem
view.setCentralItem(p) # add the plotItem to the graphicsWindow and set it as central
```
For a given graphics window, you can create multiple plots. Here, we created a single plot item and added it to the graphics window.
```
img = pg.ImageItem(border='w', levels=(x.min(),x.max())) # create an image object
p.addItem(img) # add the imageItem to the plotItem
```
Within each plot, you can define multiple drawing items (or artists). Here, we added an image item. Examples of other items are: PlotCurveItem, ArrowItem, etc.
```
# hide axis and set title
p.hideAxis('left'); p.hideAxis('bottom'); p.hideAxis('top'); p.hideAxis('right')
p.setTitle('Array Animation', size='25px', color='y')
# data update function
cnt=0
def animLoop():
global cnt
if cnt < x.shape[0]:
img.setImage(x[cnt])
cnt+=1
```
Here, we create a function to update the image item with new data. To this end, we use a counter to iterate over each image stored within x.
```
# setup and start the timer
timer = QtCore.QTimer()
timer.timeout.connect(animLoop)
timer.start(0)
```
The timer function is used to repeatedly call the animLoop with a delay of 0 between each call.
```
app.exec_() # execute the app
```
Finally, you need to execute the QApplication. Any PyQtGraph code must be wrapped between the app initialization and the app execution. Here is the code all put together (execute and check):
```
# Animate a 3D numpy array
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
app = pg.QtGui.QApplication([])
x = np.random.rand(500,50,50)
pg.setConfigOptions(antialias=True)
# main graphics window
view = pg.GraphicsView()
# show the window
view.show()
# add a plotItem
p = pg.PlotItem()
# add the plotItem to the graphicsWindow and set it as central
view.setCentralItem(p)
# create an image object
img = pg.ImageItem(border='w', levels=(x.min(),x.max()))
# add the imageItem to the plotItem
p.addItem(img)
# hide axis and set title
p.hideAxis('left'); p.hideAxis('bottom'); p.hideAxis('top'); p.hideAxis('right')
p.setTitle('Array Animation', size='25px', color='y')
# data generator
cnt=0
def animLoop():
global cnt
if cnt < x.shape[0]:
img.setImage(x[cnt])
cnt+=1
timer = QtCore.QTimer()
timer.timeout.connect(animLoop)
timer.start(0)
app.exec_()
```
## Exercise 1
* Animate an RGB array.
* Animate a 2D array (sequence of line plots). Use pg.PlotCurveItem instead of pg.ImageItem and setData instead of setImage to update the data.
# Styling Plots
PyQtGraph provides a function called mkPen(args) to create a drawing pen. The resulting pen can be passed as the pen argument (e.g., pen=pg.mkPen(...)) when defining many plot items in order to style them. A few examples of defining mkPen are:
* pg.mkPen('y', width=3, style=QtCore.Qt.DashLine) # Make a dashed yellow line 3px wide
* pg.mkPen(0.5) # Solid gray line 1px wide
* pg.mkPen(color=(200,200,255), style=QtCore.Qt.DotLine) # Dotted pale-blue line
## Exercise 2
Repeat Exercise 1 with a yellow dashed line plot animation.
# Plots Grid Layout
You can create a grid layout for your plots using the GraphicsLayout function. The layout can then be used as a placeholder for all your plots within the main graphics window. Here is an example with two plots placed next to each other beneath a wide text block:
```
# imports
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
# init qApp
app = pg.QtGui.QApplication([])
# setup the main window
view = pg.GraphicsView()
view.resize(900,500)
view.setWindowTitle('Notebook')
view.show()
# main layout
layout = pg.GraphicsLayout(border='r') # with a red bordercolor
# set the layout as a central item
view.setCentralItem(layout)
# create a text block
label = pg.LabelItem('PyQtGraph Grid Layout Example', size='25px', color='y')
# create a plot with two random curves
p1 = pg.PlotItem()
curve11 = pg.PlotCurveItem(pen=pg.mkPen(color='g', width=1))
curve12 = pg.PlotCurveItem(pen=pg.mkPen(color='b', width=1, style=QtCore.Qt.DashLine))
p1.addItem(curve11); p1.addItem(curve12)
curve11.setData(np.random.rand(100))
curve12.setData(np.random.rand(100))
# create another plot with two random curves
p2 = pg.PlotItem()
curve21 = pg.PlotCurveItem(pen=pg.mkPen(color='w', width=1, style=QtCore.Qt.DotLine))
curve22 = pg.PlotCurveItem(pen=pg.mkPen(color='c', width=1, style=QtCore.Qt.DashLine))
p2.addItem(curve21); p2.addItem(curve22)
curve21.setData(np.random.rand(100))
curve22.setData(np.random.rand(100))
# Finally organize the layout
layout.addItem(label, row=0, col=0, colspan=2)
layout.addItem(p1, row=1, col=0)
layout.addItem(p2, row=1, col=1)
app.exec_()
```
The above example also shows how to draw multiple curves within the same plot.
## Exercise 3
* Create a grid layout like the example above and animate one of the curves in the left plot.
* Animate both curves within the left plot.
# Summary
In this notebook, we have covered the basics of the PyQtGraph library to make fast animations in Python. We suggest you next have a look at the main documentation of the library and also at the examples provided within it. Enjoy animating plots!
<a id="title_ID"></a>
# Using Kepler Data to Plot a Light Curve
<br>This notebook tutorial demonstrates the process of loading and extracting information from Kepler light curve FITS files to plot a light curve and display the photometric aperture.
<img style="float: right;" src="./light_curve_tres2.png" alt="light_curve_tres2" width="800px"/>
### Table of Contents
<div style="text-align: left"> <br> [Introduction](#intro_ID) <br> [Imports](#imports_ID) <br> [Getting the Data](#data_ID) <br> [Reading FITS Extensions](#header_ID) <br> [Plotting a Light Curve](#lightcurve_ID) <br> [The Aperture Extension](#aperture_ID) <br> [Additional Resources](#resources_ID) <br> [About this Notebook](#about_ID) </div>
***
<a id="intro_ID"></a>
## Introduction
**Light curve background:**
A light curve is a plot of flux versus time that shows the variability of light output from an object. This is one way to find planets periodically transiting a star. The light curves made here will plot the corrected and uncorrected fluxes from Kepler data of object KIC 11446443 (TRES-2).
**Some notes about the file:** kplr_011446443-2009131110544_slc.fits
<br>The filename contains phrases for identification, where
- kplr = Kepler
- 011446443 = Kepler ID number
- 2009131110544 = year 2009, day 131, time 11:05:44
- slc = short cadence
**Defining some terms:**
- **Cadence:** the frequency with which summed data are read out. Files are either short cadence (a 1 minute sum) or long cadence (a 30 minute sum).
- **SAP Flux:** Simple Aperture Photometry flux; flux after summing the calibrated pixels within the optimal aperture
- **PDCSAP Flux:** Pre-search Data Conditioned Simple Aperture Photometry; these are the flux values nominally corrected for instrumental variations.
- **BJD:** Barycentric Julian Day; this is the Julian Date that has been corrected for differences in the Earth's position with respect to the Solar System Barycentre (center of mass of the Solar System).
- **HDU:** Header Data Unit; a FITS file is made up of Header or Data units that contain information, data, and metadata relating to the file. The first HDU is called the primary, and anything that follows is considered an extension.
For more information about the Kepler mission and collected data, visit the [Kepler archive page](https://archive.stsci.edu/kepler/). To read more details about light curves and relevant data terms, look in the [Kepler archive manual](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=16).
[Top of Page](#title_ID)
***
<a id="imports_ID"></a>
## Imports
Let's start by importing some libraries to the environment:
- *matplotlib notebook* for creating interactive plots
- *astropy.io fits* for accessing FITS files
- *astropy.table Table* for creating tidy tables of the data
- *matplotlib* for plotting data
```
%matplotlib notebook
from astropy.io import fits
from astropy.table import Table
import matplotlib.pyplot as plt
```
[Top of Page](#title_ID)
***
<a id="data_ID"></a>
## Getting the Data
Start by importing libraries from Astroquery. For a longer, more detailed description using of Astroquery, please visit this [tutorial](https://github.com/spacetelescope/MAST-API-Notebooks/blob/master/MUG2018_APITutorial_Astroquery.ipynb) or read the Astroquery [documentation](https://astroquery.readthedocs.io/en/latest/#).
```
from astroquery.mast import Mast
from astroquery.mast import Observations
```
<br>Next, we need to find the data file. This is similar to searching for the data using the [MAST Portal](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html) in that we will be using certain keywords to find the file. The target name of the object we are looking for is kplr011446443, collected by the Kepler spacecraft.
```
keplerObs = Observations.query_criteria(target_name='kplr011446443', obs_collection='Kepler')
keplerProds = Observations.get_product_list(keplerObs[1])
yourProd = Observations.filter_products(keplerProds, extension='kplr011446443-2009131110544_slc.fits',
mrp_only=False)
yourProd
```
<br>Now that we've found the data file, we can download it using the results shown in the table above:
```
Observations.download_products(yourProd, mrp_only = False, cache = False)
```
<br>Click on the blue URL above to download the file. You are now ready to complete the rest of the notebook.
[Top of Page](#title_ID)
***
<a id="header_ID"></a>
## Reading FITS Extensions
<br>Now that we have the file, we can start working with the data. We will begin by assigning a shorter name to the file to make it easier to use. Then, using the info function from astropy.io.fits, we can see some information about the FITS Header Data Units:
```
filename = "./mastDownload/Kepler/kplr011446443_sc_Q113313330333033302/kplr011446443-2009131110544_slc.fits"
fits.info(filename)
```
- **No. 0 (Primary): **
<br>This HDU contains meta-data related to the entire file.
- **No. 1 (Light curve): **
<br>This HDU contains a binary table that holds data like flux measurements and times. We will extract information from here when we define the parameters for the light curve plot.
- **No. 2 (Aperture): **
<br>This HDU contains the image extension with data collected from the aperture. We will also use this to display a bitmask plot that visually represents the optimal aperture used to create the SAP_FLUX column in HDU1.
For more detailed information about header extensions, look [here](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=17).
<br>Let's say we wanted to see more information about the extensions than what the fits.info command gave us. For example, we can access information stored in the header of the Binary Table extension (No. 1, LIGHTCURVE). The following line opens the FITS file, writes the first HDU extension into header1, and then closes the file. Only 24 columns are displayed here but you can view them all by adjusting the range:
```
with fits.open(filename) as hdulist:
header1 = hdulist[1].header
print(repr(header1[0:24])) #repr() prints the info into neat columns
```
<br> We can also view a table of the data from the Binary Table extension. This is where we can find the flux and time columns to be plotted later. Here only the first four rows of the table are displayed:
```
with fits.open(filename) as hdulist:
binaryext = hdulist[1].data
binarytable = Table(binaryext)
binarytable[1:5]
```
[Top of Page](#title_ID)
***
<a id="lightcurve_ID"></a>
## Plotting a Light Curve
<br>Now that we have seen and accessed the data, we can begin to plot a light curve:
1. Open the file using command fits.open. This will allow the program to read and store the data we will manipulate to be plotted. Here we've also renamed the file with a phrase that is easier to handle (see line 1).
<br>
<br>
2. Start by calibrating the time. Because the Kepler data is in BKJD (Kepler Barycentric Julian Day) we need to convert it to time in Julian Days (BJD) if we want to be able to compare it to other outside data. For a more detailed explanation about time conversions, visit the [page 13](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=13) or [page 17](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=17) of the Kepler Archive Manual.
<br>
- Read in the BJDREF times, both the integer (BJDREFI) and the floating point (BJDREFF). These are found as columns of data in the *binary extension* of the header.
<br>
<br>
3. Read in the columns of times and fluxes (both uncorrected and corrected) from the data.
```
with fits.open(filename, mode="readonly") as hdulist:
# Read in the "BJDREF" which is the time offset of the time array.
bjdrefi = hdulist[1].header['BJDREFI']
bjdreff = hdulist[1].header['BJDREFF']
# Read in the columns of data.
times = hdulist[1].data['time']
sap_fluxes = hdulist[1].data['SAP_FLUX']
pdcsap_fluxes = hdulist[1].data['PDCSAP_FLUX']
```
4. Now that the appropriate data has been read and stored, convert the times to BJDS by adding the BJDREF times to the data of times.
<br>
<br>
5. Finally, we can plot the fluxes against time. We can also set a title and add a legend to the plot. We can label our fluxes accordingly and assign them colors and styles ("-k" for a black line, "-b" for a blue line).
```
# Convert the time array to full BJD by adding the offset back in.
bjds = times + bjdrefi + bjdreff
plt.figure(figsize=(9,4))
# Plot the time, uncorrected and corrected fluxes.
plt.plot(bjds, sap_fluxes, '-k', label='SAP Flux')
plt.plot(bjds, pdcsap_fluxes, '-b', label='PDCSAP Flux')
plt.title('Kepler Light Curve')
plt.legend()
plt.xlabel('Time (days)')
plt.ylabel('Flux (electrons/second)')
plt.show()
```
[Top of Page](#title_ID)
***
<a id="aperture_ID"></a>
## The Aperture Extension
<br>We can also make a plot of the third HDU: the image extension (No. 2, APERTURE). This data is stored as an array of integers that encodes which pixels were collected from the spacecraft and which were used in the optimal aperture (look here for more information on the [aperture extension](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf#page=20)).
<br>
<br>First, we need to re-open the FITS file and access the header. Next, we read in the image extension and print it as an array:
```
with fits.open(filename) as hdulist:
imgdata = hdulist[2].data
print(imgdata)
```
We can also show the data in a plot:
```
plt.figure(2)
plt.title('Kepler Aperture')
plt.imshow(imgdata, cmap=plt.cm.YlGnBu_r)
plt.xlabel('Column')
plt.ylabel('Row')
plt.colorbar()
```
[Top of Page](#title_ID)
***
<a id="resources_ID"></a>
## Additional Resources
For more information about the MAST archive and details about mission data:
<br>
<br>[MAST API](https://mast.stsci.edu/api/v0/index.html)
<br>[Kepler Archive Page (MAST)](https://archive.stsci.edu/kepler/)
<br>[Kepler Archive Manual](https://archive.stsci.edu/kepler/manuals/archive_manual.pdf)
<br>[Exo.MAST website](https://exo.mast.stsci.edu/exo/ExoMast/html/exomast.html)
***
<a id="about_ID"></a>
## About this Notebook
**Author:** Josie Bunnell, STScI SASP Intern
<br>**Updated On:** 08/10/2018
***
[Top of Page](#title_ID)
<img style="float: right;" src="https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="STScI logo" width="200px"/>
# Statistical Independence
The word “independence” generally means free from external control or influence, but it also has a lot of connotations in US culture, as it probably does throughout the world. We will apply the concept of independence to many random phenomena, and the implication of independence is generally the same as the definition above: phenomena that are independent cannot influence each other.
In fact, we have already been applying the concept of independence throughout this book when we assume that the outcome of a coin flip, die roll, or simulation does not depend on the values seen in other trials of the same type of experiment. However, now we have the mathematical tools to define the concept of independence precisely.
## Conditional probabilities and independence
Based on the discussion above, try to answer the following question about what independence should mean for conditional probabilities. (Don't worry if you don't intuitively know the answer -- you can keep trying if you don't get it right at first!)
```
from jupyterquiz import display_quiz
git_path="https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/questions/"
#display_quiz("../questions/si-conditional.json")
display_quiz(git_path + "si-conditional.json")
```
Click the “+” sign to reveal the answer and discussion -->
```{toggle}
If $B$ is independent of $A$, then knowledge of $A$ occurring should not change the probability of $B$ occurring. I.e., if we are *given* that $A$ occurred, then the conditional probability of $B$ occurring should equal the unconditional probability:
$$
P(B|A) = P(B)
$$
Let's see the implications of this by substituting the formula for $P(B|A)$ from the definition:
$$
\begin{aligned}
\frac{P(A \cap B)}{P(A)} &= P(B) \\
\Rightarrow P(A \cap B) &= P(A)P(B)
\end{aligned}
$$ (p-b-given-a)
Now we might ask: if $B$ is independent of $A$, does that imply that $A$ is independent of $B$? Let's assume that {eq}`p-b-given-a` holds and apply the result to the definition for $P(A|B)$, assuming that $P(B)>0$:
\begin{align*}
P(A|B) & =\frac{ P(A \cap B) } {P(B) } \\
& = \frac{ P(A) P( B) } {P(B) } \\
& = P(A)
\end{align*}
So if $P(B|A) = P(B)$, then $P(A|B)=P(A)$.
```
## Formal definition of statistically independent events
A simple definition for conditional probability of events that satisfies all the forms of independence discussed above and that can deal with events with probability zero is as follows:
```{panels}
:column: col-9
DEFINITION
^^^
statistically independent (two events)
: Given a probability space $S, \mathcal{F}, P$ and two events $A \in \mathcal{F}$ and $B \in \mathcal{F}$, $A$ and $B$ are *statistically independent* if and only if (iff)
$$
P(A \cap B) = P(A)P(B).
$$
```
If the context is clear, we will often just write “independent” instead of “statistically independent” or write *s.i.*, which is a commonly used abbreviation.
````{note}
Please take time to study the definition of *statistically independent* carefully. In particular, note the following:
* **Events** can be statistically independent or not
* Probabilities **are not** something that are statistically independent or not
* The “if and only if” statement means that the definition applies in both directions:
* If events $A$ and $B$ are statistically independent, then the probability of the intersection of the events factors as the product of the individual events, $P(A \cap B) = P(A)P(B)$.
* If we have events $A$ and $B$ for which $P(A \cap B) = P(A)P(B)$, then $A$ and $B$ are statistically independent.
````
## When can we assume independence?
Statistical independence is often assumed for many types of events. However, it is important to be careful when applying such a strong assumption because events can be coupled in ways that are subtle. For example, consider the Magician's Coin example. Many people assume that the event of getting Heads on the second flip of the chosen coin will be independent of the outcome of the first flip of the coin. However, we have seen that this assumption is wrong! So, when can we assume that events will be independent?
**Events can be assumed to be statistically independent if they arise from completely separate random phenomena.**
In the case of the Magician's Coin, this assumption is violated in a subtle way. If we knew that the two-headed coin was in use, then we would know the results completely. What is subtle is the fact that observing the outcome of the first flip may give some information about which coin is in use (although we won't be able to show this for observing heads on the first flip until Chapter 6).
Examples that are assumed to result from separate random phenomena are extensive:
* **Devices to generate randomness in games:** Independence can usually be assumed for different flips of a fair coin, rolls of a fair die, or card hands drawn from shuffled decks.
* **Failures of different devices in systems:** mechanical and electrical devices fail at random, and the failures at different devices are often assumed to be independent; examples include light bulbs in a building or computers in a lab.
* **Characteristics of people unrelated to any grouping of those people:** for example, for a group of people at a meeting, having a March birthday would generally be independent events across any two people (a quick calculation illustrating this follows below).
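For instance, treating the two birthdays as arising from separate random phenomena, the probability that two particular attendees both have March birthdays is approximately

$$
P(\text{both March}) \approx \left( \frac{31}{365} \right)^2 \approx 0.0072,
$$

which is just the product of the individual probabilities.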
Let's apply this concept to find a simpler way to solve a problem that was introduced in {doc}`../04-probability1/axiomatic-prob`:
**Example**
**(Take 3)** A fair six-sided die is rolled twice. What is the probability that either of the rolls is a value less than 3?
As before, let $E_i$ be the event that the top face on roll $i$ is less than 3, for $i=1,2$.
We assume that different rolls of the die are independent, so $E_1$ and $E_2$ are independent.
As in {doc}`../04-probability1/corollaries`, we can use Corollary 5 of the Axioms of Probability to write
$$
P(E_1 \cup E_2) = P(E_1) + P(E_2) - P(E_1 \cap E_2)
$$
Before, we had to enumerate $E_1 \cap E_2$ over the sample space for the combined roll of the dice to determine $P(E_1 \cap E_2)$. Now, we can just apply statistical independence to write $P(E_1 \cap E_2) = P(E_1)P(E_2)$, yielding
\begin{align*}
P(E_1 \cup E_2) &= P(E_1) + P(E_2) - P(E_1)P(E_2) \\
&= \frac{1}{3} + \frac{1}{3} - \left(\frac{1}{3}\right)\left(\frac{1}{3} \right) \\
&= \frac 5 9 .
\end{align*}
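As a quick numerical check (a sketch; the seed and number of trials are arbitrary), simulating the two rolls many times should give a relative frequency close to $5/9 \approx 0.556$:
```
import numpy as np

rng = np.random.default_rng(42)
num_sims = 100_000

# Roll a fair six-sided die twice per trial
rolls = rng.integers(1, 7, size=(num_sims, 2))

# Event: either roll is a value less than 3
either_less_than_3 = (rolls < 3).any(axis=1)

print(either_less_than_3.mean())  # should be close to 5/9 ~ 0.556
```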
**Exercises**
Answer these questions to practice this form of statistical independence:
```
#display_quiz("../questions/si1.json")
display_quiz(git_path + "si1.json")
```
If $A$ and $B$ are s.i. events, then the following pairs of events are also s.i.:
* $A$ and $\overline{B}$
* $\overline{A}$ and $B$
* $\overline{A}$ and $\overline{B}$
I.e., if the probability of an event $A$ occurring does not depend on whether some event $B$ occurs, then it cannot depend on whether the event $B$ does not occur. This probably matches your intuition. However, we should verify it. Let's check the first example. We need to evaluate $P(A \cap \overline{B})$ to see if it factors as $P(A)P(\overline{B})$. Referring to the Venn diagram below, we can see that $A$ consists of the union of the mutually exclusive parts, $A \cap B$ and $A \cap \overline{B}$. So we can write $P\left(A \cap \overline{B} \right)= P(A) - P(A \cap B)$.
<img src="figs/si-intersection.png" alt="Venn Diagram Showing Relation of $A$, $A \cap \overline{B}$, and $A \cap B$" width="400px" style="margin-left:auto;margin-right:auto;">
Then by utilizing the fact that $A$ and $B$ are s.i., we have
\begin{align}
P\left(A \cap \overline{B} \right) &= P(A) - P(A \cap B) \\
&= P(A) - P(A) P(B) \\
&= P(A) \left[ 1- P\left(B\right) \right] \\
&= P(A) P\left( \overline{B} \right)
\end{align}
So, if $A$ and $B$ are s.i., so are $A$ and $\overline{B}$. The other expressions follow through similar manipulation. This is important because we often use this fact to simplify solving problems. We start with a simple example to demonstrate the basic technique:
**Example**
**(Take 4)** A fair six-sided die is rolled twice. What is the probability that either of the rolls is a value less than 3?
As before, let $E_i$ be the event that the top face on roll $i$ is less than 3, for $i=1,2$, and $E_1$ and $E_2$ are s.i. then
\begin{align}
P(E_1 \cup E_2) &= 1 - P\left(\overline{E_1 \cup E_2}\right) \\
&= 1 - P\left( \overline{E_1} \cap \overline{E_2} \right) \\
&= 1 - P\left( \overline{E_1} \right) P\left( \overline{E_2} \right) \\
&= 1 - \left[ 1 - P\left( {E_1} \right)\right]
\left[ 1- P\left( {E_2} \right) \right]\\
&= 1- \left[ 1 - \left( \frac 2 6 \right) \right] \left[ 1 - \left( \frac 2 6 \right) \right] \\
&= \frac 5 9
\end{align}
Of course for this simple example, it is easiest to directly compute $P\left(\overline{E_1} \right)$, but the full approach shown here is a template that is encountered often when dealing with unions of s.i. events.
To see the power of this method, we first need to define s.i. for more than two events:
````{panels}
:column: col-9
DEFINITION
^^^
statistically independent (for any number of events)
: Given a probability space $S, \mathcal{F}, P$, a collection of events $E_0, E_1, \ldots E_{n-1}$ in $\mathcal{F}$ are *statistically independent* if and only if (iff)
\begin{align}
P(E_i \cap E_j) &= P(E_i) P(E_j), ~~ \forall i \ne j \\
P(E_i \cap E_j \cap E_k) &= P(E_i) P(E_j) P(E_k), ~~ \forall i \ne j \ne k \\
&\;\;\vdots \\
P(E_0 \cap E_1 \cap \ldots \cap E_{n-1}) &= P(E_0) P(E_1) \cdots P(E_{n-1})
\end{align}
````
It is not sufficient to just check that the probability of every pair of events factors as the product of the probabilities of the individual events. That defines a weaker form of independence:
````{panels}
:column: col-9
DEFINITION
^^^
pairwise statistically independent
: Given a probability space $S, \mathcal{F}, P$, a collection of events $E_0, E_1, \ldots E_{n-1}$ in $\mathcal{F}$ are *pairwise statistically independent* if and only if (iff)
\begin{align}
P(E_i \cap E_j) &= P(E_i) P(E_j), ~~ \forall i \ne j
\end{align}
````
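To see why the distinction matters, consider a classic counterexample: flip two fair coins, and let $A$ be the event that the first flip is heads, $B$ the event that the second flip is heads, and $C$ the event that the two flips match. Then $P(A)=P(B)=P(C)=1/2$ and every pairwise intersection has probability $1/4$, so the three events are pairwise statistically independent; however, $P(A \cap B \cap C) = 1/4 \ne (1/2)^3$, so they are not statistically independent.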
We want to use complements to convert the unions to intersections and the resulting general form looks like
\begin{align}
P\left( \bigcup_i E_i \right) &=
1- \prod_i \left[ 1- P\left( E_i \right) \right].
\end{align}
It may be helpful to interpret this as follows: The complement of any of a collection of events occurring is that none of those events occurs; thus the probability that any of a collection of events occurs is one minus the probability that none of those events occurs.
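For example, suppose a lab has three computers, and suppose each computer fails during a given week with probability 0.1, independently of the others. Then the probability that at least one computer fails that week is $1 - (1-0.1)^3 = 1 - 0.729 = 0.271$.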
Compare the simplicity of this approach to the form for directly solving for the probability of unions of events (Corollary 7 from {doc}`../04-probability1/corollaries`):
\begin{eqnarray*}
P\left( \bigcup_{k=1}^{n} A_k \right) &=&
\sum_{k=1}^{n} P\left(A_k\right)
-\sum_{j<k} P \left( A_j \cap A_k \right) + \cdots \\
&& +
(-1)^{(n+1) } P\left(A_1 \cap A_2 \cap \cdots \cap A_n \right)
\end{eqnarray*}
Now apply this approach to solve the following practice problems:
```
from jupyterquiz import display_quiz
git_path="https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/"
#display_quiz("quiz/si-unions.json")
display_quiz(git_path + "06-conditional-prob/quiz/si-unions.json")
```
## Relating Statistical Independent and Mutually Exclusive Events
```
git_path1="https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/06-conditional-prob/quiz/"
#display_quiz("quiz/si-me.json")
display_quiz(git_path1 + "si-me.json")
```
Click the “+” sign to reveal the discussion -->
```{toggle}
Suppose $A$ and $B$ are events that are both mutually exclusive and statistically independent.
Since $A$ and $B$ are m.e., $A \cap B = \emptyset$, which further implies $P(A \cap B) = P(\emptyset) =0$.
Since $A$ and $B$ are s.i., $P(A \cap B) = P(A) P(B)$.
Combining these, we have that $P(A \cap B) = P(A)P(B) = 0$, which can only occur if either or both of $P(A)=0$ or $P(B)=0$.
Thus, events **cannot be both statistically independent and mutually exclusive unless at least one of the events has probability zero**.
To gain some further insight into this, consider further the m.e. condition, $A \cap B = \emptyset$. This condition implies that if $A$ occurs, then $B$ cannot have occurred, and vice versa. Thus, knowing that either $A$ or $B$ occurred provides a lot of information about the other event. Thus, $A$ and $B$ cannot be independent if they are m.e., except in the special case already identified.
```
## Terminology Review
Use the flashcards below to help you review the terminology introduced in this section.
```
from jupytercards import display_flashcards
#display_flashcards('flashcards/'+'independence.json')
github='https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/'
github+='06-conditional-prob/flashcards/'
display_flashcards(github+'independence.json')
```
```
import numpy as np
import pandas as pd
import patsy as pt
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import statsmodels.api as sm
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
import warnings
warnings.filterwarnings('ignore')
```
## 7.8 Lab: Non-Linear Modelling
Load wage dataset
```
wage_df = pd.read_csv('./data/Wage.csv')
wage_df = wage_df.drop(wage_df.columns[0], axis=1)
wage_df['education'] = wage_df['education'].map({'1. < HS Grad': 1.0,
'2. HS Grad': 2.0,
'3. Some College': 3.0,
'4. College Grad': 4.0,
'5. Advanced Degree': 5.0
})
wage_df.head()
```
### Polynomial regression
```
# Derive 4 degree polynomial features of age
degree = 4
f = ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, wage_df)
y = np.asarray(wage_df['wage'])
# Fit linear model
model = sm.OLS(y, X).fit()
y_hat = model.predict(X)
model.summary()
# STATS
# ----------------------------------
# Reference: https://stats.stackexchange.com/questions/44838/how-are-the-standard-errors-of-coefficients-calculated-in-a-regression
# Covariance of coefficient estimates
mse = np.sum(np.square(y_hat - y)) / y.size
cov = mse * np.linalg.inv(X.T @ X)
# ...or alternatively this stat is provided by stats models:
#cov = model.cov_params()
# Calculate variance of f(x)
var_f = np.diagonal((X @ cov) @ X.T)
# Derive standard error of f(x) from variance
se = np.sqrt(var_f)
conf_int = 2*se
# PLOT
# ----------------------------------
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
# Plot datapoints
sns.scatterplot(x='age', y='wage',
color='tab:gray',
alpha=0.2,
ax=ax,
data=pd.concat([wage_df['age'], wage_df['wage']], axis=1));
# Plot estimated f(x)
sns.lineplot(x=X[:, 1], y=y_hat, ax=ax, color='blue');
# Plot confidence intervals
sns.lineplot(x=X[:, 1], y=y_hat+conf_int, color='blue');
sns.lineplot(x=X[:, 1], y=y_hat-conf_int, color='blue');
# dashed confidence intervals
ax.lines[1].set_linestyle("--")
ax.lines[2].set_linestyle("--")
```
### Selecting degrees of freedom for polynomial regression with ANOVA
**ISL Authors:** In performing a polynomial regression we must decide on the degree of the polynomial to use. One way to do this is by using hypothesis tests. We now fit models ranging from linear to a degree-5 polynomial and seek to determine the simplest model which is sufficient to explain the relationship between wage and age.
```
# Derive 5 degree polynomial features of age
degree = 5
f = ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, wage_df)
y = np.asarray(wage_df['wage'])
# Get models of increasing degrees
model_1 = sm.OLS(y, X[:, 0:2]).fit()
model_2 = sm.OLS(y, X[:, 0:3]).fit()
model_3 = sm.OLS(y, X[:, 0:4]).fit()
model_4 = sm.OLS(y, X[:, 0:5]).fit()
model_5 = sm.OLS(y, X[:, 0:6]).fit()
# Compare models with ANOVA
display(sm.stats.anova_lm(model_1, model_2, model_3, model_4, model_5))
```
**ISL Authors:** The p-value comparing the linear Model 1 to the quadratic Model 2 is essentially zero ($<10^{-15}$), indicating that a linear fit is not sufficient. Similarly the p-value comparing the quadratic Model 2 to the cubic Model 3 is very low (0.0017), so the quadratic fit is also insufficient. The p-value comparing the cubic and degree-4 polynomials, Model 3 and Model 4, is approximately 5% while the degree-5 polynomial Model 5 seems unnecessary because its p-value is 0.37. Hence, either a cubic or a quartic polynomial appear to provide a reasonable fit to the data, but lower- or higher-order models are not justified.
```
model_5.pvalues
```
**Revision note:** ISL suggests that the above results should be the same as the ANOVA p-values, but that isn't observed here using statsmodels. Why?
**ISL Authors:** However, the ANOVA method works whether or not we used orthogonal polynomials; it also works when we have other terms in the model as well. For example, we can use anova() to compare these three models:
```
# Derive 3 degree polynomial features of age, plus education
degree = 3
f = 'education +' + ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, wage_df)
y = np.asarray(wage_df['wage'])
# Get models of increasing degrees
model_1 = sm.OLS(y, X[:, 0:3]).fit()
model_2 = sm.OLS(y, X[:, 0:4]).fit()
model_3 = sm.OLS(y, X[:, 0:5]).fit()
# Compare models with ANOVA
display(sm.stats.anova_lm(model_1, model_2, model_3))
```
### Polynomial logistic regression with bootstrapped confidence intervals
```
# Create logistic response for wage > 250
wage_df['wage_above_250'] = (wage_df['wage'] > 250).astype(np.float64)
wage_df.head()
def logit_boot(df, idx):
# Derive 4 degree polynomial features of age
degree = 4
f = ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, df.loc[idx])
y = np.asarray(df['wage_above_250'].loc[idx])
    # Some test observations to predict on
    x1_test = np.arange(20, 81)
    X_test = np.array([np.ones(len(x1_test)), x1_test, np.power(x1_test, 2), np.power(x1_test, 3), np.power(x1_test, 4)]).T
# Fit logistic regression model
model = sm.Logit(y, X).fit(disp=0)
y_hat = model.predict(X_test)
return y_hat
def tenth_percentile(df, idx):
Z = np.array(df.loc[idx])
return np.percentile(Z, 10)
def boot_idx(n):
"""Return index for bootstrap sample of size n
e.g. generate array in range 0 to n, with replacement"""
return np.random.randint(low=0, high=n, size=n)
def boot(fn, data_df, samples):
"""Perform bootstrap for B number of samples"""
results = []
for s in range(samples):
Z = fn(data_df, boot_idx(data_df.shape[0]))
results += [Z]
return np.array(results)
# Get y_hat for B number of bootstrap samples
B = 1000
boot_obs = boot(logit_boot, wage_df, samples=B)
SE_pred = np.std(boot_obs, axis=0)
# Calculate 5% and 95% percentiles of y_hat across all bootstrap samples
upper = np.percentile(boot_obs, 95, axis=0)
lower = np.percentile(boot_obs, 5, axis=0)
# Derive 4 degree polynomial features of age
degree = 4
f = ' + '.join(['np.power(age, {})'.format(i) for i in np.arange(1, degree+1)])
X = pt.dmatrix(f, wage_df)
y = np.asarray(wage_df['wage_above_250'])
# Some test observations
x1_test = np.arange(20,81)
X_test = np.array([np.ones(len(x1_test)), x1_test, np.power(x1_test, 2), np.power(x1_test, 3), np.power(x1_test, 4)]).T
# Fit logistic regression model
model = sm.Logit(y, X).fit(disp=0)
y_hat = model.predict(X_test)
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
plot_df = pd.DataFrame({'Age': x1_test, 'Pr(Wage>250 | Age)': y_hat})
sns.lineplot(x='Age', y='Pr(Wage>250 | Age)', data=plot_df, color='red')
sns.lineplot(x=x1_test, y=upper, color='blue');
sns.lineplot(x=x1_test, y=lower, color='blue');
# Plot all f(x) estimations
for b in boot_obs:
#plot_df = pd.DataFrame({'Age': boot_obs[0][:, 0], 'Pr(Wage>250 | Age)': boot_obs[0][:, 1]})
sns.lineplot(x=x1_test, y=b, alpha=0.05)
```
Here I've used the bootstrap sampling method to get estimates of f(x) for 1000 samples of the dataset. The 5th and 95th percentile of these estimates are shown in blue. The estimate for f(x) using the full dataset is shown in red.
**Revision note:** I expected the 5th and 95th percentiles to correspond to the confidence intervals reported by the ISL authors. They are largely similar except for the higher bound for high values of age which tends to zero here but for the ISL authors tends to 1.
### Step function
```
### Step function
steps = 6
# Segment data into 6 bins by age
cuts = pd.cut(wage_df['age'], steps)
X = np.asarray(pd.get_dummies(cuts))
y = np.asarray(wage_df['wage'])
# Fit linear regression model
model = sm.OLS(y, X).fit(disp=0)
y_hat = model.predict(X)
# PLOT
# ----------------------------------
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
# Plot datapoints
sns.scatterplot(x='age', y='wage',
color='tab:gray',
alpha=0.2,
ax=ax,
data=pd.concat([wage_df['age'], wage_df['wage']], axis=1));
# Plot estimated f(x)
sns.lineplot(x=wage_df['age'], y=y_hat, ax=ax, color='blue');
```
## 7.8.2 Splines
```
# Putting confidence interval calcs into function for convenience.
def confidence_interval(X, y, y_hat):
"""Compute 5% confidence interval for linear regression"""
# STATS
# ----------------------------------
# Reference: https://stats.stackexchange.com/questions/44838/how-are-the-standard-errors-of-coefficients-calculated-in-a-regression
# Covariance of coefficient estimates
mse = np.sum(np.square(y_hat - y)) / y.size
cov = mse * np.linalg.inv(X.T @ X)
# ...or alternatively this stat is provided by stats models:
#cov = model.cov_params()
# Calculate variance of f(x)
var_f = np.diagonal((X @ cov) @ X.T)
# Derive standard error of f(x) from variance
se = np.sqrt(var_f)
conf_int = 2*se
return conf_int
# Fit a cubic spline basis for age (df=7, include_intercept=True)
# Use patsy to generate entire matrix of basis functions
X = pt.dmatrix('bs(age, df=7, degree=3, include_intercept=True)', wage_df)
y = np.asarray(wage_df['wage'])
# Fit linear regression model
model = sm.OLS(y, X).fit(disp=0)
y_hat = model.predict(X)
conf_int = confidence_interval(X, y, y_hat)
# PLOT
# ----------------------------------
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
# Plot datapoints
sns.scatterplot(x='age', y='wage',
color='tab:gray',
alpha=0.2,
ax=ax,
data=pd.concat([wage_df['age'], wage_df['wage']], axis=1));
# Plot estimated f(x)
sns.lineplot(x=wage_df['age'], y=y_hat, ax=ax, color='blue');
# Plot confidence intervals
sns.lineplot(x=wage_df['age'], y=y_hat+conf_int, color='blue');
sns.lineplot(x=wage_df['age'], y=y_hat-conf_int, color='blue');
# dashed confidence intervals
ax.lines[1].set_linestyle("--")
ax.lines[2].set_linestyle("--")
# Fit a natural spline with seven degrees of freedom
# Use patsy to generate entire matrix of basis functions
X = pt.dmatrix('cr(age, df=7)', wage_df) # REVISION NOTE: Something funky happens when df=6
y = np.asarray(wage_df['wage'])
# Fit linear regression model
model = sm.OLS(y, X).fit(disp=0)
y_hat = model.predict(X)
conf_int = confidence_interval(X, y, y_hat)
# PLOT
# ----------------------------------
# Setup axes
fig, ax = plt.subplots(figsize=(10,10))
# Plot datapoints
sns.scatterplot(x='age', y='wage',
color='tab:gray',
alpha=0.2,
ax=ax,
data=pd.concat([wage_df['age'], wage_df['wage']], axis=1));
# Plot estimated f(x)
sns.lineplot(x=wage_df['age'], y=y_hat, ax=ax, color='blue');
# Plot confidence intervals
sns.lineplot(x=wage_df['age'], y=y_hat+conf_int, color='blue');
sns.lineplot(x=wage_df['age'], y=y_hat-conf_int, color='blue');
# dashed confidence intervals
ax.lines[1].set_linestyle("--")
ax.lines[2].set_linestyle("--")
```
Comparing the above two plots, we can see the increased linearity of the natural spline at the boundaries of age. This seems to yield slightly narrower confidence intervals at the extremes of age.
The ISL authors cover smoothing splines in addition to the above. Smoothing splines seem to be poorly supported in Python; the closest equivalent I could find is `scipy.interpolate.UnivariateSpline`.
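As a rough illustration, the sketch below fits `scipy.interpolate.UnivariateSpline` to the mean wage at each age; the smoothing factor `s` is an arbitrary choice here and is not equivalent to choosing effective degrees of freedom as in R's `smooth.spline`.
```
# Illustrative smoothing spline fit (not part of the original ISL lab)
from scipy.interpolate import UnivariateSpline

# UnivariateSpline needs increasing x without duplicates, so collapse to the mean wage per age
age_means = wage_df.groupby('age')['wage'].mean()
spl = UnivariateSpline(age_means.index.values, age_means.values, s=len(age_means))  # s chosen arbitrarily

age_grid = np.linspace(wage_df['age'].min(), wage_df['age'].max(), 200)
fig, ax = plt.subplots(figsize=(10,10))
sns.scatterplot(x='age', y='wage', color='tab:gray', alpha=0.2, ax=ax, data=wage_df)
sns.lineplot(x=age_grid, y=spl(age_grid), ax=ax, color='blue');
```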
### 7.8.3 GAMs
**ISL Authors:** We now fit a GAM to predict wage using natural spline functions of year and age, treating education as a qualitative predictor, as in (7.16). Since this is just a big linear regression model using an appropriate choice of basis functions, we can simply do this using the lm() function.
```
# Use patsy to generate entire matrix of basis functions
X = pt.dmatrix('cr(year, df=4)+cr(age, df=5) + education', wage_df)
y = np.asarray(wage_df['wage'])
# Fit linear regression model
model = sm.OLS(y, X).fit()
y_hat = model.predict(X)
conf_int = confidence_interval(X, y, y_hat)
# Plot estimated f(year)
sns.lineplot(x=wage_df['year'], y=y_hat);
# Plot estimated f(age)
sns.lineplot(x=wage_df['age'], y=y_hat);
# Plot estimated f(education)
sns.boxplot(x=wage_df['education'], y=y_hat);
```
Not quite the same as the plots achieved by the ISL authors using R, but gives similar insight.
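One reason for the difference is that R's `plot.gam` shows the partial contribution of each term rather than the full fitted values. A minimal sketch of a partial-effect plot for the age term is below; it assumes patsy's `design_info.term_name_slices` keys the term exactly as it is written in the formula.
```
# Partial contribution of the natural-spline age term only (illustrative sketch)
age_slice = X.design_info.term_name_slices['cr(age, df=5)']  # inspect term_name_slices if the key differs
age_effect = np.asarray(X)[:, age_slice] @ np.asarray(model.params)[age_slice]

order = np.argsort(np.asarray(wage_df['age']))
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(np.asarray(wage_df['age'])[order], age_effect[order], color='blue')
ax.set_xlabel('age')
ax.set_ylabel('partial effect on wage');
```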
### Comparing GAM configurations with ANOVA
```
# Model 1
X = pt.dmatrix('cr(age, df=5) + education', wage_df)
y = np.asarray(wage_df['wage'])
model1 = sm.OLS(y, X).fit(disp=0)
# Model 2
X = pt.dmatrix('year+cr(age, df=5) + education', wage_df)
y = np.asarray(wage_df['wage'])
model2 = sm.OLS(y, X).fit(disp=0)
# Model 3
X = pt.dmatrix('cr(year, df=4)+cr(age, df=5) + education', wage_df)
y = np.asarray(wage_df['wage'])
model3 = sm.OLS(y, X).fit(disp=0)
# Compare models with ANOVA
display(sm.stats.anova_lm(model1, model2, model3))
```
The `Pr(>F)` of 0.000174 for `Model 2` suggests that it is significantly better than model 1, whereas with a p-value > 0.05 model 3 does not seem to be significantly better than model 2.
We conclude that inclusion of a linear year feature improves the model, but there is no evidence that a non-linear function of year improves it further.
```
display(model3.summary())
```
Inspecting the p-values for the model 3 features, we note a p-value > 0.05 for x9, which corresponds to the 5th degree of freedom for age.
**Revision note:** The ISL authors report high pvalues for year features, which would reinforce the above ANOVA result, but we can't see that here. Perhaps the OLS `.summary()` is not equivalent to R's `summary(gam)`
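One way to make the comparison easier is to refit with the patsy column names attached so each coefficient is labelled rather than appearing as x1, x2, and so on; a minimal sketch, assuming `design_info.column_names` is available on the design matrix:
```
# Refit model 3 with named columns so the coefficient table is self-describing
X_named = pd.DataFrame(np.asarray(X), columns=X.design_info.column_names)
model3_named = sm.OLS(y, X_named).fit()
display(model3_named.summary())
```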
### Local Regression GAM
```
x = np.asarray(wage_df['age'])
y = np.asarray(wage_df['wage'])
# Create lowess feature for age
wage_df['age_lowess'] = sm.nonparametric.lowess(y, x, frac=.7, return_sorted=False)
# Fit linear regression model with the lowess-smoothed age feature
X = pt.dmatrix('cr(year, df=4)+ age_lowess + education', wage_df)
y = np.asarray(wage_df['wage'])
model = sm.OLS(y, X).fit()
model.summary()
```
<a href="https://colab.research.google.com/github/imiled/DeepLearningMaster/blob/master/Tensorflow_Utils.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!apt-get update > /dev/null 2>&1
!apt-get install cmake > /dev/null 2>&1
!pip install --upgrade setuptools > /dev/null 2>&1
!pip install tensorflow-gpu==2.0.0 > /dev/null 2>&1
import tensorflow as tf
import numpy as np
```
Let's try to fit a parabolic function using a small dense network, built three different ways below.
```
f = lambda x: 2*x**2 + x +1
x_train = np.linspace(-100,100,1000)
y_train = f(x_train)
x_test = np.linspace(-110,-100.01,10)
y_test = f(x_test)
```
# Model Definition
### Sequential API
```
sequential_model = tf.keras.models.Sequential()
sequential_model.add(tf.keras.layers.Dense(64, input_shape=(1,), activation='relu'))
sequential_model.add(tf.keras.layers.Dense(32, activation='relu'))
sequential_model.add(tf.keras.layers.Dense(1))
sequential_model.summary()
sequential_model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.mean_squared_error)
sequential_model.fit(x_train, y_train, batch_size=8, epochs=10, validation_split=.2)
sequential_model.predict(x_test)
```
### Functional API
```
x = tf.keras.layers.Input(shape=(1,))
dense_relu_64 = tf.keras.layers.Dense(64, activation='relu')(x)
dense_relu_32 = tf.keras.layers.Dense(32, activation='relu')(dense_relu_64)
y = tf.keras.layers.Dense(1)(dense_relu_32)
functional_model = tf.keras.Model(x, y)
functional_model.summary()
functional_model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.mean_squared_error)
functional_model.fit(x_train, y_train, batch_size=8, epochs=10, validation_split=.2)
functional_model.predict(x_test)
```
### Model Subclassing
```
class NN(tf.keras.Model):
def __init__(self):
super(NN, self).__init__()
self.dense_relu_64 = tf.keras.layers.Dense(64, activation='relu')
self.dense_relu_32 = tf.keras.layers.Dense(32, activation='relu')
self.dense_linear_1 = tf.keras.layers.Dense(1)
def call(self, inputs):
x = self.dense_relu_64(inputs)
x = self.dense_relu_32(x)
x = self.dense_linear_1(x)
return x
subclassing = NN()
x_test_sub = np.expand_dims(x_test, axis=1)
print(subclassing(x_test_sub))
x_test.shape
```
# Training Model Subclassing
### Fit
```
subclassing.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.mean_squared_error)
subclassing.fit(x_train, y_train, batch_size=8, epochs=10, validation_split=.2)
subclassing.predict(x_test)
```
### tf.GradientTape
```
optimizer = tf.keras.optimizers.Adam()  # create the optimizer once so its state persists across steps

def optimize(model, x, y):
    with tf.GradientTape() as tape:  # record the forward pass so gradients can be computed
        pred = model(x)
        loss = tf.reduce_mean(tf.keras.losses.MSE(y, pred))
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    return model, loss
subclassing = NN()
x_test_sub = np.expand_dims(x_test, axis=1)
epochs = 10
for i in range(epochs):
subclassing, loss = optimize(subclassing, x_test_sub, y_test)
print(i, loss)
```
```
import numpy as np
from numpy.random import normal, uniform
from scipy.stats import multivariate_normal as mv_norm
from collections import OrderedDict
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits import mplot3d
%matplotlib inline
```
## Functions to Generate the Training and Test Datasets
#### Details of target function generation
The target function at each node is generated as follows:
$T = \mathbf{a}^T\phi(\mathbf{X}) + Z$, where
$\mathbf{X} = [X_1, X_2, \ldots, X_N]^T$ denotes the random data point,
$\phi(\mathbf{X}) = [1, X_1, X_2, \ldots, X_N]^T$ denotes the feature vector obtained from data point,
$\mathbf{a} = [a_0, a_1, \ldots, a_N]^T$ denotes the weight vector,
$Z$ denotes Gaussian noise with zero mean and $T$ denotes the target value.
For simplicity we assume $Z \sim \mathcal{N}(0, \beta^{-1})$, where $\beta$ denotes the precision. Hence the target values $T \sim \mathcal{N}(\mathbf{a}^T\phi(\mathbf{X}), \beta^{-1})$
Therefore the likelihood of $T = t$ given $\mathbf{X} = \mathbf{x}$ denoted by $p(t|\mathbf{x}, \mathbf{a})$ has the Gaussian distribution $\mathcal{N}(\mathbf{a}^T\phi(\mathbf{x}), \beta^{-1})$ whose likelihood is given by $G(t, \mathbf{a}^T\phi(\mathbf{x}), \beta^{-1})$
```
# x_vec = [x1, x2, ... , xi] and xi is available to node i only
def real_function(a_vec, noise_sigma, X):
N = X.shape[0]
N_samples = X.shape[1]
#Evaluates the real function
f_value = a_vec[0]
for i in range(0, N):
f_value += a_vec[i+1]*X[i,:]
if noise_sigma==0:
# Recovers the true function
return f_value
else:
return f_value + normal(0, noise_sigma, N_samples)
```
#### Details of data points generation across the network
Data point $\mathbf{X} = [X_1, X_2, \ldots, X_N]^T$ is an $N$ dimensional vector, where each $X_i \sim Unif[l_i, u_i]$.
```
# generate training set for each node
def generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_samples):
# generates N_samples copies of X which are uniformly distributed over [l,u]
N = len(l_vec)
X = np.zeros((N, N_samples), dtype=float)
for i in range(0,N):
X[i, :] = uniform(l_vec[i], u_vec[i], N_samples)
# Evaluate the real function for training example inputs
t = real_function(a_vec, noise_sigma, X)
return X, t
```
## Training and Testing Procedure
### Training at each node without cooperation
We consider a network of $N$ nodes. We generate $N$ datasets network wide.
For node $i$:
Each node $i$'s local and private dataset is denoted by $\mathcal{D}_i = \{(\mathbf{X}_i^{(j)}, t^{(j)}), j \in \{1,2, \ldots, N_{0}\}\}$, where each $\mathbf{X}_i^{(j)}$ is an $N$ dimensional data point.
Using the given dataset $\mathcal{D}_i$ at node $i$, we want to able to predict $t$ given a new input $\mathbf{x}$, i.e, make a prediction based the following predictive distribution
\begin{align}
p(t|\mathbf{x}, \mathcal{D}_i)
\end{align}
The predictive distribution can be obtained as follows
\begin{align}
p(t|\mathbf{x}, \mathcal{D}_i) &= \int p(t, \mathbf{a}|\mathbf{x}, \mathcal{D}_i)d\mathbf{a} \\
& = \int p(t|\mathbf{x}, \mathbf{a}, \mathcal{D}_i)p(\mathbf{a}|\mathcal{D}_i)d\mathbf{a} \\
& = \int p(t|\mathbf{x}, \mathbf{a})p(\mathbf{a}|\mathcal{D}_i)d\mathbf{a}
\end{align}
We train each node using the dataset $\mathcal{D}_i$ to obtain $p(\mathbf{a}|\mathcal{D}_i)$. We obtain the posterior distribution on weight vector $\mathbf{a}$ in a Bayesian fashion, i.e., we start with a prior on $\mathbf{a}$ given by
\begin{align}
p(\mathbf{a}) = G(\mathbf{a}, \boldsymbol{\mu}_0, \boldsymbol{\Sigma}_0)
\end{align}
For simplicity we consider $\boldsymbol{\mu}_0 = 0$ and $\boldsymbol{\Sigma}_0 = \alpha^{-1}I$.
We update the posterior distribution on $\mathbf{a}$ in an online fashion or sequential fashion as we observe the data. Let $\boldsymbol{\mu}^{(k)}_i$ and $\boldsymbol{\Sigma}^{(k)}_i$ denote the mean and covariance matrix of the posterior distribution after observing $k$ samples from $\mathcal{D}_i$. Then, after observing $k+1$th point $(\mathbf{x}_i^{(k+1)}, t_i^{(k+1)})$ we use Bayes rule (for more details on Bayesian linear regression please refer to Bishop's treatment of the Bayesian approach to linear regression.) to obtain $\boldsymbol{\mu}^{(k+1)}_i$ and $\boldsymbol{\Sigma}^{(k+1)}_i$ as follows
\begin{align}
(\boldsymbol{\Sigma}^{(k+1)}_i)^{-1}
&= (\boldsymbol{\Sigma}^{(k)}_i)^{-1} + \beta \phi(\mathbf{x}_i^{(k+1)})^T\phi(\mathbf{x}_i^{(k+1)})
\\
\boldsymbol{\mu}^{(k+1)}_i
&= \boldsymbol{\Sigma}^{(k+1)}_i\left((\boldsymbol{\Sigma}^{(k)}_i)^{-1} \boldsymbol{\mu}_i^{(k)} + \beta \phi(\mathbf{x}_i^{(k+1)})^T t_i^{(k+1)} \right)
\end{align}
Update using the above equations until we have looped through the entire local datasets.
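A minimal NumPy sketch of this sequential update (the `LinearSeqBayes` class below implements the same step):
```
# One sequential Bayesian update for a single (x, t) pair.
# phi is the feature vector [1, x_1, ..., x_N]; mu, Sigma are the current posterior parameters.
def bayes_update(mu, Sigma, phi, t, beta):
    Sigma_inv_new = np.linalg.inv(Sigma) + beta * np.outer(phi, phi)
    Sigma_new = np.linalg.inv(Sigma_inv_new)
    mu_new = Sigma_new @ (np.linalg.inv(Sigma) @ mu + beta * phi * t)
    return mu_new, Sigma_new
```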
### Training at each node with peer-to-peer cooperation
Again we want to train each node using the dataset $\mathcal{D}_i$ and cooperation with neighbors in the graph given by social interaction matrix $\mathbf{W}$ to obtain $p^{(k)}(\mathbf{a})$ after each node has observed $k$ training samples.
We obtain the posterior distribution on weight vector $\mathbf{a}$ in a Bayesian fashion, i.e., we start with a prior on $\mathbf{a}$ given by
\begin{align}
p^{(0)}(\mathbf{a}) = G(\mathbf{a}, \boldsymbol{\mu}_0, \boldsymbol{\Sigma}_0)
\end{align}
For simplicity we consider $\boldsymbol{\mu}_0 = 0$ and $\boldsymbol{\Sigma}_0 = \alpha^{-1}I$.
$\underline{\text{Local Bayesian Update Step:}}$
We update the posterior distribution on $\mathbf{a}$ in an online fashion or sequential fashion as we observe the data. Let $\boldsymbol{\mu}^{(k)}_i$ and $\boldsymbol{\Sigma}^{(k)}_i$ denote the mean and covariance matrix of the posterior distribution after observing $k$ samples from $\mathcal{D}_i$. Then, after observing $k+1$th point $(\mathbf{x}_i^{(k+1)}, t_i^{(k+1)})$ we use Bayesian update to obtain $\boldsymbol{\mu}^{(k+1)}_i$ and $\boldsymbol{\Sigma}^{(k+1)}_i$ as follows
\begin{align}
(\boldsymbol{\Sigma}^{(k+1)}_i)^{-1}
&= (\boldsymbol{\Sigma}^{(k)}_i)^{-1} + \beta \phi(\mathbf{x}_i^{(k+1)})^T\phi(\mathbf{x}_i^{(k+1)})
\\
\boldsymbol{\mu}^{(k+1)}_i
&= \boldsymbol{\Sigma}^{(k+1)}_i\left((\boldsymbol{\Sigma}^{(k)}_i)^{-1} \boldsymbol{\mu}_i^{(k)} + \beta \phi(\mathbf{x}_i^{(k+1)})^T t_i^{(k+1)} \right)
\end{align}
$\underline{\text{Consensus Step:}}$
The merged covariance matrix $\overline{\boldsymbol{\Sigma}}^{(k+1)}_i$ for node $i$ is given as
\begin{align}
(\overline{\boldsymbol{\Sigma}}^{(k+1)}_i)^{-1} = \sum_{j = 1}^N W_{ij}(\boldsymbol{\Sigma}_j^{(k+1)})^{-1}.
\end{align}
The merged mean value for node $i$ is given as
\begin{align}
\overline{\boldsymbol{\mu}}^{(k+1)}_i = \overline{\boldsymbol{\Sigma}}^{(k+1)}_i \sum_{j=1}^N W_{ij}(\boldsymbol{\Sigma}_j^{(k+1)})^{-1}\boldsymbol{\mu}_j^{(k+1)} .
\end{align}
Update using the above equations until we have looped through the entire local datasets.
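A minimal NumPy sketch of the consensus step for node $i$, given the row of weights $W_{i\cdot}$ and the neighbours' posterior parameters:
```
# Merge the neighbours' posteriors for node i using the weights W_row = [W_i1, ..., W_iN]
def consensus_merge(W_row, mus, Sigmas):
    merged_precision = sum(w * np.linalg.inv(S) for w, S in zip(W_row, Sigmas))
    Sigma_bar = np.linalg.inv(merged_precision)
    mu_bar = Sigma_bar @ sum(w * np.linalg.inv(S) @ m for w, S, m in zip(W_row, Sigmas, mus))
    return mu_bar, Sigma_bar
```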
### Prediction on the test dataset at each node
The predictive distribution on plugging in the values gives us
\begin{align}
p(t| \mathbf{x}) &= \int p(t| \mathbf{x}, \mathbf{a}) p^{(N_0)}(\mathbf{a})d\mathbf{a}
\\
& = \int G(t, \mathbf{a}^T\phi(\mathbf{x}), \beta^{-1}) G(\mathbf{a}, \overline{\boldsymbol{\mu}}^{(N_0)}_i, \overline{\boldsymbol{\Sigma}}^{(N_0)}_i) d\mathbf{a}
\\
& = G(t, (\overline{\boldsymbol{\mu}}^{(N_0)}_i)^T\phi(\mathbf{x}), \overline{\boldsymbol{\Sigma}}^{\ast}_i),
\end{align}
where
\begin{align}
\overline{\boldsymbol{\Sigma}}^{\ast}_i = \beta^{-1} + \phi(\mathbf{x})^T\overline{\boldsymbol{\Sigma}}^{(N_0)}_i \phi(\mathbf{x})
\end{align}
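In code, the predictive mean and variance for a new input reduce to the following sketch (the `predict_test_set` method below computes the same quantities, with the constant feature placed last):
```
# Predictive distribution for a new input x given the (merged) posterior (mu_bar, Sigma_bar)
def predictive(mu_bar, Sigma_bar, x, beta):
    phi = np.concatenate(([1.0], x))      # feature vector [1, x_1, ..., x_N]
    mean = phi @ mu_bar
    var = 1.0 / beta + phi @ Sigma_bar @ phi
    return mean, var
```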
## Initialize the Linear Bayes Class Object
#### Details of each node and its posterior distribution
Each node has access to $\mathbf{X}_i = [X_{i1}, X_{i2}, \ldots, X_{iN}]$, which is an $N$-dimensional data point. However, $\mathbf{X}_i \in \mathcal{X}_i \subset \mathbb{R}^N$, where $\mathcal{X}_i$ denotes the local data space.
```
class LinearSeqBayes(object):
"""
A class that holds parameter prior/posterior and handles
the hyper-parameter updates with new data
Note: variables starting with "_vec" indicate Nx1 dimensional
column vectors, those starting with "_mat" indicate
matrices, and those starting with "_arr" indicate
1xN dimensional arrays.
Args:
mean0_arr (np.array): prior mean vector of size 1xM
covar0_mat (np.ndarray): prior covariance matrix of size MxM
beta (float): known real-data noise precision
"""
def __init__(self, mean0_arr, covar0_mat, beta):
self.prior = mv_norm(mean=mean0_arr, cov=covar0_mat)
self.meanPrev_vec = mean0_arr.reshape(mean0_arr.shape + (1,)) #reshape to column vector
self.covarPrev_mat = covar0_mat
self.beta = beta
self.meanCurrent_vec = self.meanPrev_vec
self.covarCurrent_mat = self.covarPrev_mat
self.posterior = self.prior
self.prediction = self.prior
def get_phi_mat(self, X):
N = X.shape[0]
phi_mat = np.ones((X.shape[0]+1, X.shape[1]))
for i in range(0,N):
phi_mat[i,:] = X[i,:]
return phi_mat
def get_phi(self, x_vec):
"""
Note that the other terms in x_vec are not from other nodes
in the network. These are local N dimensional data points
If some dimensions are not seen at node i they are set to zero
"""
N = len(x_vec)
phi_vec = np.ones((1, N+1))
for i in range(0,N):
phi_vec[:, i] = x_vec[i]
return phi_vec
def set_posterior(self, x_vec, t):
"""
Updates current mean vec and covariance matrix given x and t value
"""
phi_vec = self.get_phi(x_vec)
self.covarCurrent_mat = np.linalg.inv(np.linalg.inv(self.covarPrev_mat) + self.beta*phi_vec.T.dot(phi_vec))
self.meanCurrent_vec = self.covarCurrent_mat.dot(np.linalg.inv(self.covarPrev_mat).dot(self.meanPrev_vec)) + \
self.covarCurrent_mat.dot(self.beta*phi_vec.T.dot(t))
self.posterior = mv_norm(mean=self.meanCurrent_vec.flatten(), cov=self.covarCurrent_mat)
def merge_PosteriorParams(self, W_vec, meanCurrent_dict, covarCurrent_mat_dict):
N = len(W_vec)
dummy_mean = np.zeros((N+1,1), dtype = float)
dummy_covar = np.zeros((N+1,N+1), dtype = float)
for i in range(0,N):
dummy_mean += np.linalg.inv(covarCurrent_mat_dict[i]).dot(meanCurrent_dict[i])*W_vec[i]
dummy_covar += np.linalg.inv(covarCurrent_mat_dict[i])*W_vec[i]
self.covarCurrent_mat = np.linalg.inv(dummy_covar)
self.meanCurrent_vec = self.covarCurrent_mat.dot(dummy_mean)
def update_prevPosteriorParams(self):
# update the previous mean and covariance to new updated one using one sample (x_vec,t)
self.covarPrev_mat = self.covarCurrent_mat
self.meanPrev_vec = self.meanCurrent_vec
def predict_test_set(self,X):
N_samples = X.shape[1]
x_mat = self.get_phi_mat(X)
predictions = []
for idx in range(0,N_samples):
x = x_mat[:,idx]
sig_sq_x = 1/self.beta + x.T.dot(self.covarCurrent_mat.dot(x))
mean_x = self.meanCurrent_vec.T.dot(x)
predictions.append(normal(mean_x.flatten(), np.sqrt(sig_sq_x)))
return np.array(predictions)
def compute_mse(self, t, predictions):
N = len(t)
err = np.array(t-predictions)
err = np.square(err)
return sum(err)/N
def make_scatter(self, x1_arr, x2_arr, t_arr, real_parms, samples=None, stdevs=None):
"""
A helper function to plot noisy data, the true function,
and optionally a set of lines specified by the nested array of
weights of size NxM where N is number of lines, M is 2 for
this simple model
"""
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x1_arr, x2_arr, t_arr, alpha=0.5)
ax.set_xlabel('x_1')
ax.set_ylabel('x_2')
ax.set_zlabel('t')
x1, x2 = np.mgrid[-1:1:.01, -1.5:1.5:.01]
x = np.stack((x1,x2))
ax.plot_surface(x1, x2, real_function(a_vec, 0, x), cmap=cm.coolwarm)
_ = plt.title('Real Data from Noisy Linear Function')
```
### Bayesian Linear Regression for single node
```
# Real function parameters
N_train = 500
a_0 = -0.3
a_1 = 0.5
a_2 = 0.8
a_vec = np.array([a_0, a_1, a_2])
l1 = -1
u1 = 1
l2 = -1.5
u2 = 1.5
l_vec = np.array([l1, l2])
u_vec = np.array([u1, u2])
noise_sigma = 0.8
beta = 1/noise_sigma**2
# Generate input features from uniform distribution
np.random.seed(20) # Set the seed so we can get reproducible results
# generates N training samples
[X_train_mat, t_train_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_train)
N_test = int(N_train/5)
[X_test_mat,t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes.make_scatter(X_train_mat[0,:], X_train_mat[1,:], t_train_vec, real_parms = [a_0, a_1, a_2])
```
#### Main Training loop: Training averaged over multiple sample paths
```
max_runs = 500
avg_mse_vec = np.zeros((N_train), dtype = float)
for t in range(0, max_runs):
# generates N training samples
[X_train_mat, t_train_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_train)
N_test = int(N_train/5)
[X_test_mat, t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes = LinearSeqBayes(mean0_vec, covar0_mat, beta)
for n in range(0, N_train):
linbayes.set_posterior(X_train_mat[:,n], t_train_vec[n])
linbayes.update_prevPosteriorParams()
predictions_vec = linbayes.predict_test_set(X_test_mat)
mse_vec[n] = linbayes.compute_mse(t_test_vec, predictions_vec.flatten())
avg_mse_vec += mse_vec
avg_mse_vec = avg_mse_vec/max_runs
avg_mse_vec_1node = avg_mse_vec
plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_1node,'k', label='Mean Squared Error for Central Node')
plt.xlabel(r'Epoch', fontsize = 12)
plt.ylabel(r'MSE', fontsize = 12)
plt.legend()
plt.ylim([0.8, 3.2])
#plt.xlim([0,500])
plt.savefig('MSEVsIter_1node_LearningGlobal.eps', dpi = 450)
plt.show()
```
### Bayesian Linear Regression for two nodes without cooperation
```
# Real function parameters
N_train = 500
a_0 = -0.3
a_1 = 0.5
a_2 = 0.5
a_vec = np.array([a_0, a_1, a_2])
l1 = -1
u1 = 1
l2 = -1.5
u2 = 1.5
l_vec = np.array([l1, l2])
u_vec = np.array([u1, u2])
l1_vec = np.array([l1, 0])
u1_vec = np.array([u1, 0])
l2_vec = np.array([0, l2])
u2_vec = np.array([0, u2])
noise_sigma = 0.8
beta = 1/noise_sigma**2
# Generate input features from uniform distribution
np.random.seed(20) # Set the seed so we can get reproducible results
# generates N training samples for node 1
[X1_train_mat, t1_train_vec] = generate_training_set(l1_vec, u1_vec, a_vec, noise_sigma, N_train)
# generates N training samples for node 2
[X2_train_mat, t2_train_vec] = generate_training_set(l2_vec, u2_vec, a_vec, noise_sigma, N_train)
# common test set
N_test = int(N_train/5)
[X_test_mat, t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec_node1 = np.zeros((N_train), dtype = float)
mse_vec_node2 = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes_node1 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes_node2 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes_node1.make_scatter(X1_train_mat[0,:], X1_train_mat[1,:], t1_train_vec, real_parms = [a_0, a_1, a_2])
linbayes_node2.make_scatter(X2_train_mat[0,:], X2_train_mat[1,:], t2_train_vec, real_parms = [a_0, a_1, a_2])
```
#### Main Training loop: Training averaged over multiple sample paths
```
max_runs = 500
avg_mse_vec_node1 = np.zeros((N_train), dtype = float)
avg_mse_vec_node2 = np.zeros((N_train), dtype = float)
for t in range(0, max_runs):
# generates N training samples for node 1
[X1_train_mat, t1_train_vec] = generate_training_set(l1_vec, u1_vec, a_vec, noise_sigma, N_train)
# generates N training samples for node 2
[X2_train_mat, t2_train_vec] = generate_training_set(l2_vec, u2_vec, a_vec, noise_sigma, N_train)
# common test set
N_test = int(N_train/5)
[X_test_mat, t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec_node1 = np.zeros((N_train), dtype = float)
mse_vec_node2 = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes_node1 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes_node2 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
for n in range(0, N_train):
linbayes_node1.set_posterior(X1_train_mat[:,n], t1_train_vec[n])
linbayes_node1.update_prevPosteriorParams()
predictions_vec_node1 = linbayes_node1.predict_test_set(X_test_mat)
mse_vec_node1[n] = linbayes_node1.compute_mse(t_test_vec, predictions_vec_node1.flatten())
linbayes_node2.set_posterior(X2_train_mat[:,n], t2_train_vec[n])
linbayes_node2.update_prevPosteriorParams()
predictions_vec_node2 = linbayes_node2.predict_test_set(X_test_mat)
mse_vec_node2[n] = linbayes_node2.compute_mse(t_test_vec, predictions_vec_node2.flatten())
avg_mse_vec_node1 += mse_vec_node1
avg_mse_vec_node2 += mse_vec_node2
avg_mse_vec_node1 = avg_mse_vec_node1/max_runs
avg_mse_vec_node2 = avg_mse_vec_node2/max_runs
avg_mse_vec_node1_NoCoop = avg_mse_vec_node1
avg_mse_vec_node2_NoCoop = avg_mse_vec_node2
mse_central, = plt.plot(np.linspace(0, N_train, num=N_train), 1.27821171*np.ones((N_train), dtype = float), linestyle= '--', color = [0, 0,0],label='Mean Squared Error at Central Node')
mse_node1, = plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_node1_NoCoop, color = '#e41a1c',label='Mean Squared Error at Node 1')
mse_node2, = plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_node2_NoCoop, color = '#377eb8', label='Mean Squared Error at Node 2')
plt.xlabel(r'Number of communication rounds', fontsize=12)
plt.ylabel(r'MSE', fontsize=12)
plt.legend(fontsize=12)
plt.ylim([0.8, 3.2])
plt.savefig('MSEVsIter_2nodes_LearningNoCooperation_centralNode.eps', dpi = 450)
plt.show()
```
### Bayesian Linear Regression for two nodes with cooperation
```
# Real function parameters
N_train = 500
N = 2
W = np.array([np.array([0.9, 0.1]), np.array([0.6, 0.4])])
a_0 = -0.3
a_1 = 0.5
a_2 = 0.5
a_vec = np.array([a_0, a_1, a_2])
l1 = -1
u1 = 1
l2 = -1.5
u2 = 1.5
l_vec = np.array([l1, l2])
u_vec = np.array([u1, u2])
l1_vec = np.array([l1, 0])
u1_vec = np.array([u1, 0])
l2_vec = np.array([0, l2])
u2_vec = np.array([0, u2])
noise_sigma = 0.8
beta = 1/noise_sigma**2
# Generate input features from uniform distribution
np.random.seed(20) # Set the seed so we can get reproducible results
```
#### Main Training Loop: Training averaged over multiple sample paths
```
max_runs = 500
avg_mse_vec_node1 = np.zeros((N_train), dtype = float)
avg_mse_vec_node2 = np.zeros((N_train), dtype = float)
for t in range(0, max_runs):
# generates N training samples for node 1
[X1_train_mat, t1_train_vec] = generate_training_set(l1_vec, u1_vec, a_vec, noise_sigma, N_train)
# generates N training samples for node 2
[X2_train_mat, t2_train_vec] = generate_training_set(l2_vec, u2_vec, a_vec, noise_sigma, N_train)
# common test set
N_test = int(N_train/5)
[X_test_mat, t_test_vec] = generate_training_set(l_vec, u_vec, a_vec, noise_sigma, N_test)
mse_vec_node1 = np.zeros((N_train), dtype = float)
mse_vec_node2 = np.zeros((N_train), dtype = float)
alpha = 2.0
mean0_vec = np.array([0., 0., 0.])
covar0_mat = 1/alpha*np.identity(3)
linbayes_node1 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
linbayes_node2 = LinearSeqBayes(mean0_vec, covar0_mat, beta)
for n in range(0, N_train):
# perform local bayesian update at each node
linbayes_node1.set_posterior(X1_train_mat[:,n], t1_train_vec[n])
linbayes_node2.set_posterior(X2_train_mat[:,n], t2_train_vec[n])
# initialize the dictionaries with current posterior parameters
mean_dict, covar_mat_dict = {}, {}
mean_dict[0] = linbayes_node1.meanCurrent_vec
mean_dict[1] = linbayes_node2.meanCurrent_vec
covar_mat_dict[0] = linbayes_node1.covarCurrent_mat
covar_mat_dict[1] = linbayes_node2.covarCurrent_mat
# perform the consensus step
linbayes_node1.merge_PosteriorParams(W[0], mean_dict, covar_mat_dict)
linbayes_node2.merge_PosteriorParams(W[1], mean_dict, covar_mat_dict)
# update the local posteriors with merged posteriors
linbayes_node1.update_prevPosteriorParams()
linbayes_node2.update_prevPosteriorParams()
# evaluate on the test dataset
predictions_vec_node1 = linbayes_node1.predict_test_set(X_test_mat)
mse_vec_node1[n] = linbayes_node1.compute_mse(t_test_vec, predictions_vec_node1.flatten())
predictions_vec_node2 = linbayes_node2.predict_test_set(X_test_mat)
mse_vec_node2[n] = linbayes_node2.compute_mse(t_test_vec, predictions_vec_node2.flatten())
avg_mse_vec_node1 += mse_vec_node1
avg_mse_vec_node2 += mse_vec_node2
avg_mse_vec_node1 = avg_mse_vec_node1/max_runs
avg_mse_vec_node2 = avg_mse_vec_node2/max_runs
mse_central, = plt.plot(np.linspace(0, N_train, num=N_train), 1.27821171*np.ones((N_train), dtype = float), linestyle= '--', color = [0, 0,0],label='Mean Squared Error at Central Node')
mse_node1, = plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_node1, color = '#e41a1c', label='Mean Squared Error at Node 1')
mse_node2, = plt.plot(np.linspace(0, N_train, num=N_train), avg_mse_vec_node2, color = '#377eb8', label='Mean Squared Error at Node 2')
plt.xlabel(r'Number of communication rounds', fontsize=12)
plt.ylabel(r'MSE', fontsize=12)
plt.legend(fontsize=12)
plt.ylim([0.8, 3.2])
plt.savefig('MSEVsIter_2nodes_LearningWithCoop_centralNode.eps', dpi = 450)
plt.show()
```
```
# default_exp callback.PredictionDynamics
```
# PredictionDynamics
> Callback used to visualize model predictions during training.
This is an implementation created by Ignacio Oguiza ([email protected]) based on a blog post by Andrej Karpathy ([A Recipe for Training Neural Networks](https://karpathy.github.io/2019/04/25/recipe/)) that I read some time ago and really liked. One of the things he mentioned was this:
>"**visualize prediction dynamics**. I like to visualize model predictions on a fixed test batch during the course of training. The “dynamics” of how these predictions move will give you incredibly good intuition for how the training progresses. Many times it is possible to feel the network “struggle” to fit your data if it wiggles too much in some way, revealing instabilities. Very low or very high learning rates are also easily noticeable in the amount of jitter." A. Karpathy
```
#export
from fastai.callback.all import *
from tsai.imports import *
# export
class PredictionDynamics(Callback):
order, run_valid = 65, True
def __init__(self, show_perc=1., figsize=(6, 6), alpha=.3, size=30, color='lime', cmap='gist_rainbow'):
"""
Args:
show_perc: percent of samples from the valid set that will be displayed. Default: 1 (all).
You can reduce it if the number is too high and the chart is too busy.
alpha: level of transparency. Default:.3. 1 means no transparency.
figsize: size of the chart. You may want to expand it if too many classes.
size: size of each sample in the chart. Default:30. You may need to decrease it a bit if too many classes/ samples.
color: color used in regression plots.
cmap: color map used in classification plots.
The red lines in classification tasks indicate the average probability of the true class.
"""
store_attr("show_perc,figsize,alpha,size,color,cmap")
def before_fit(self):
self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds")
if not self.run:
return
self.cat = True if (hasattr(self.dls, "c") and self.dls.c > 1) else False
if self.show_perc != 1:
valid_size = len(self.dls.valid.dataset)
self.show_idxs = np.random.choice(valid_size, int(round(self.show_perc * valid_size)), replace=False)
# Prepare ground truth container
self.y_true = []
def before_epoch(self):
# Prepare empty pred container in every epoch
self.y_pred = []
def after_pred(self):
if self.training:
return
# Get y_true in epoch 0
if self.epoch == 0:
self.y_true.extend(self.y.cpu().flatten().numpy())
# Gather y_pred for every batch
if self.cat:
y_pred = torch.gather(F.softmax(self.pred.detach().cpu(), 1), -1, self.y.cpu().reshape(-1, 1).long())
else:
y_pred = self.pred.detach().cpu()
self.y_pred.extend(y_pred.flatten().numpy())
def after_epoch(self):
# Ground truth
if self.epoch == 0:
self.y_true = np.array(self.y_true)
if self.show_perc != 1:
self.y_true = self.y_true[self.show_idxs]
self.y_bounds = (np.min(self.y_true), np.max(self.y_true))
self.min_x_bounds, self.max_x_bounds = np.min(self.y_true), np.max(self.y_true)
self.y_pred = np.array(self.y_pred)
if self.show_perc != 1:
self.y_pred = self.y_pred[self.show_idxs]
if self.cat:
self.update_graph(self.y_pred, self.y_true)
else:
# Adjust bounds during validation
self.min_x_bounds = min(self.min_x_bounds, np.min(self.y_pred))
self.max_x_bounds = max(self.max_x_bounds, np.max(self.y_pred))
x_bounds = (self.min_x_bounds, self.max_x_bounds)
self.update_graph(self.y_pred, self.y_true, x_bounds=x_bounds, y_bounds=self.y_bounds)
def after_fit(self):
plt.close(self.graph_ax.figure)
def update_graph(self, y_pred, y_true, x_bounds=None, y_bounds=None):
if not hasattr(self, 'graph_fig'):
self.df_out = display("", display_id=True)
if self.cat:
self._cl_names = self.dls.vocab
self._classes = L(self.dls.vocab.o2i.values())
self._n_classes = len(self._classes)
self._h_vals = np.linspace(-.5, self._n_classes - .5, self._n_classes + 1)[::-1]
_cm = plt.get_cmap(self.cmap)
self._color = [_cm(1. * c/self._n_classes) for c in range(1, self._n_classes + 1)][::-1]
self._rand = []
for i, c in enumerate(self._classes):
self._rand.append(.5 * (np.random.rand(np.sum(y_true == c)) - .5))
self.graph_fig, self.graph_ax = plt.subplots(1, figsize=self.figsize)
self.graph_out = display("", display_id=True)
self.graph_ax.clear()
if self.cat:
for i, c in enumerate(self._classes):
self.graph_ax.scatter(y_pred[y_true == c], y_true[y_true == c] + self._rand[i], color=self._color[i],
edgecolor='black', alpha=self.alpha, linewidth=.5, s=self.size)
self.graph_ax.vlines(np.mean(y_pred[y_true == c]), i - .5, i + .5, color='r')
self.graph_ax.vlines(.5, min(self._h_vals), max(self._h_vals), linewidth=.5)
self.graph_ax.hlines(self._h_vals, 0, 1, linewidth=.5)
self.graph_ax.set_xlim(0, 1)
self.graph_ax.set_ylim(min(self._h_vals), max(self._h_vals))
self.graph_ax.set_xticks(np.linspace(0, 1, 11))
self.graph_ax.set_yticks(self._classes)
self.graph_ax.set_yticklabels(self._cl_names)
self.graph_ax.set_xlabel('probability of true class', fontsize=12)
self.graph_ax.set_ylabel('true class', fontsize=12)
self.graph_ax.grid(axis='x', color='gainsboro', linewidth=.2)
else:
self.graph_ax.scatter(y_pred, y_true, lw=1, color=self.color,
edgecolor='black', alpha=self.alpha, linewidth=.5, s=self.size)
self.graph_ax.set_xlim(*x_bounds)
self.graph_ax.set_ylim(*y_bounds)
self.graph_ax.plot([*x_bounds], [*x_bounds], color='gainsboro')
self.graph_ax.set_xlabel('y_pred', fontsize=12)
self.graph_ax.set_ylabel('y_true', fontsize=12)
self.graph_ax.grid(color='gainsboro', linewidth=.2)
self.graph_ax.set_title(f'Prediction Dynamics \nepoch: {self.epoch +1}/{self.n_epoch}')
self.df_out.update(pd.DataFrame(np.stack(self.learn.recorder.values)[-1].reshape(1,-1),
columns=self.learn.recorder.metric_names[1:-1], index=[self.epoch]))
self.graph_out.update(self.graph_ax.figure)
from fastai.data.all import *
from fastai.metrics import *
from tsai.data.all import *
from tsai.models.utils import *
from tsai.learner import *
from tsai.models.InceptionTimePlus import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, split_data=False)
check_data(X, y, splits, False)
tfms = [None, [Categorize()]]
batch_tfms = [TSStandardize(by_var=True)]
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(dls, InceptionTimePlus, metrics=accuracy, cbs=PredictionDynamics())
learn.fit_one_cycle(2, 3e-3)
# hide
from tsai.imports import *
out = create_scripts(); beep(out)
```
# Visualization: Trading Session
```
import pandas as pd
import numpy as np
import altair as alt
import seaborn as sns
```
### 1. Define parameters and Load model
```
from trading_bot.agent import Agent
model_name = 'model_GOOG_50'
test_stock = 'data/GOOG_2019.csv'
window_size = 10
debug = True
agent = Agent(window_size, pretrained=True, model_name=model_name)
```
### 2. Load test data
```
# read csv into dataframe
df = pd.read_csv(test_stock)
# filter out the desired features
df = df[['Date', 'Adj Close']]
# rename feature column names
df = df.rename(columns={'Adj Close': 'actual', 'Date': 'date'})
# convert dates from object to DateTime type
dates = df['date']
dates = pd.to_datetime(dates, infer_datetime_format=True)
df['date'] = dates
df.head()
```
### 3. Running Eval
```
import logging
import coloredlogs
from trading_bot.utils import show_eval_result, switch_k_backend_device, get_stock_data
from trading_bot.methods import evaluate_model
coloredlogs.install(level='DEBUG')
switch_k_backend_device()
test_data = get_stock_data(test_stock)
initial_offset = test_data[1] - test_data[0]
test_result, history = evaluate_model(agent, test_data, window_size, debug)
show_eval_result(model_name, test_result, initial_offset)
```
### 4. Visualize
```
def visualize(df, history, title="trading session"):
# add history to dataframe
position = [history[0][0]] + [x[0] for x in history]
actions = ['HOLD'] + [x[1] for x in history]
df['position'] = position
df['action'] = actions
# specify y-axis scale for stock prices
scale = alt.Scale(domain=(min(min(df['actual']), min(df['position'])) - 50, max(max(df['actual']), max(df['position'])) + 50), clamp=True)
# plot a line chart for stock positions
actual = alt.Chart(df).mark_line(
color='green',
opacity=0.5
).encode(
x='date:T',
y=alt.Y('position', axis=alt.Axis(format='$.2f', title='Price'), scale=scale)
).interactive(
bind_y=False
)
# plot the BUY and SELL actions as points
points = alt.Chart(df).transform_filter(
alt.datum.action != 'HOLD'
).mark_point(
filled=True
).encode(
x=alt.X('date:T', axis=alt.Axis(title='Date')),
y=alt.Y('position', axis=alt.Axis(format='$.2f', title='Price'), scale=scale),
color='action'
).interactive(bind_y=False)
# merge the two charts
chart = alt.layer(actual, points, title=title).properties(height=300, width=1000)
return chart
chart = visualize(df, history, title=test_stock)
chart
```
```
cc.VerificationHandler.close_browser()
```
## Time to crack in and find some more mother elements
#### Don't let complexity ruin tempo
```
% run contactsScraper.py
orgsForToday = ['National Association for Multi-Ethnicity In Communications (NAMIC)',
'Association for Women in Science',
'Brain Injury Association of America',
'American Society of Home Inspectors',
'NAADAC, the Association for Addiction Professionals',
'American Public Transportation Association',
'Indiana Soybean Alliance',
'Associated Builders and Contractors (ABC)',
'National Association of Social Workers',
'American Marketing Association (AMA)']
org = orgsForToday[9]
vh = cc.MotherSetVerifier(org)
pointers = vh.verifiedPointers
len(pointers)
cc.VerificationHandler.orgRecords.orgSessionStatusCheck()
import numpy as np
np.matrix([pointers, pointers])
## Grandmother Finding Algorithm
gmElements = []
gmMatrix = []
for i in range(len(pointers)):
igmElements = []
for j in range(i):
## Check to see if the Any Mother element is a Big Momma or "Bertha" Element
if pointers[i].get_mother_element() is pointers[j].get_mother_element():
gm = pointers[i].get_mother_element()
else:
gm = pointers[i].common_parent(pointers[j])
# Append Match to Grand Mother Matrix
igmElements.append(gm)
# Check to see if this is a new grand mother element,
# if so append to the gmElements list of unique grandmother elements
if gm not in gmElements:
gmElements.append(gm)
# Append Matrix Row
gmMatrix.append(igmElements)
grandMotherMatrix = np.matrix(gmMatrix)
grandMotherMatrix
```
## Just what was Expected, 1 grandmother element
```
len(gmElements)
type(gmElements[0])
```
## Find other Mother elements with the same attributes within the found GrandMother
```
a = pointers[1].get_mother_element()
b = pointers[0].get_mother_element()
gm = gmElements[0]
a.parent is gm
a.parent
print(gm.prettify())
b.attrs
a.attrs == b.attrs
a.name
b.name
gm = gmElements[0]
finds = gm.contents
len(finds)
findsSib = gm.find_all("h2")
findsSib
```
## There are verified pointers and there are elements that mimic them
```
gm
mothers = pointers
mothers[0].tom_here()
mothers[0].tom
mothers[0].mary_here()
mothers[0].tom.parent.parent is mothers[0].mary
mothers[0].tom.parent.attrs
mothers[0].tom.parent.contents
mothers[0].tom.parent['toms'] = 0
mothers[0].nathan_here()
mothers[0].nathan
mothers[0].nathan.parent['nathans'] = 0
mothers[0].nathan.parent.parent is mothers[0].get_mother_element()
## Tag elements with attributes up the ancestral chain from tom all the way to the mother element
def tag_nathans(pt):
## Precondition: The name pointer for this verified pointer is a nathan
return parent_cycle_up(pt.get_mother_element(), pt.nathan.parent, 'nathans', 0)
def tag_toms(pt):
return parent_cycle_up(pt.get_mother_element(), pt.tom.parent, 'toms', 0)
def parent_cycle_up(motherElement, element, atr, num):
if element is motherElement:
return
else:
element[atr] = num
return parent_cycle_up(motherElement, element.parent, atr, num + 1)
def get_nathan(fnd, taggedPt):
## Learn fnd from a mother
## get from the root to the foot
## precondition fnd is a found mother element
return parent_cycle_down(fnd.children, taggedPt.get_mother_element().children, 'nathans')
def get_tom(fnd, taggedPt):
## Learn a find from a mother
## get tom from the root to the foot
## precondition fnd is a found mother element
return parent_cycle_down(fnd.children, taggedPt.get_mother_element().children, 'toms')
def parent_cycle_down(fi, mi, atr):
## Loop across both found and mother iterators
## Precondition: 'atr' is an attribute of at least one element in mi
for f, s in zip(fi, mi):
## look for attr
print('foundTrunk: ' + f.name + str(f.attrs) + ' motherTrunk: ' + s.name + str(s.attrs))
if atr in s.attrs:
if s[atr] == 0: ## Tag enclosing the pointer
## Return String inside, thats all!
return f.string
else:
return parent_cycle_down(f.children, s.children, atr)
tag_nathans(mothers[1])
tag_toms(mothers[1])
```
## Walking the Tree of a verified pointer
```
mother1 = mothers[1].get_mother_element()
mi = mother1.children
s = next(mi)
s
'nathans' in s.attrs
si = s.children
s = next(si)
s
s.string
mothers[0].get_mother_element
get_tom(mothers[1].get_mother_element(), mothers[0])
get_tom(mothers[0].get_mother_element(), mothers[1])
mothers[1].tom
mothers[0].tom
mothers[1].nathan
get_nathan(mothers[1].get_mother_element(), mothers[0])
mothers[0].nathan
get_nathan(mothers[0].get_mother_element(), mothers[1])
```
## Bring it all together
#### For all verified pointers tag the nathans and toms
#### Test each tagged verified pointer against each found mother element to identify nathans and toms!
#### reunite the estranged family!
```
import pandas as pd
## For all verified pointers tag the nathans and toms
for mother in mothers:
tag_nathans(mother)
tag_toms(mother)
tomSet = pd.DataFrame([{mother.tom:get_tom(find, mother) for mother in mothers} for find in finds])
nathanSet = pd.DataFrame([{mother.nathan:get_nathan(find, mother) for mother in mothers} for find in finds])
len(finds)
tomSet
nathanSet
```
# Code for capsule_layers.py
```
"""
Some key layers used for constructing a Capsule Network. These layers can used to construct CapsNet on other dataset,
not just MNIST.
*NOTE*: Some functions may be implemented in multiple ways, I keep all of them. You can try them for youself just by
uncommenting them and commenting their counterparts.
"""
import keras.backend as K
import tensorflow as tf
from keras import initializers, layers
def squash(vectors, axis=-1):
"""
The non-linear activation used in Capsule. It drives the length of a large vector to near 1 and small vector to 0
:param vectors: some vectors to be squashed, N-dim tensor
:param axis: the axis to squash
:return: a Tensor with same shape as input vectors
"""
s_squared_norm = K.sum(K.square(vectors), axis=axis, keepdims=True)
scale = s_squared_norm / (1+s_squared_norm) / K.sqrt(s_squared_norm+K.epsilon())
return scale*vectors
class CapsuleLayer(layers.Layer):
    """
    Capsule layer with dynamic routing (the full __init__/build/call implementation is not shown in this cell).
    """
    pass


def PrimaryCap(inputs, dim_capsule, n_channels, kernel_size, strides, padding):
    """
    Apply Conv2D `n_channels` times and concatenate all capsules
    :param inputs: 4D tensor, shape=[None, width, height, channels]
    :param dim_capsule: the dim of the output vector of capsule
    :param n_channels: the number of types of capsules
    :return: output tensor, shape = [None, num_capsule, dim_capsule]
    """
    output = layers.Conv2D(filters=dim_capsule*n_channels, kernel_size=kernel_size, strides=strides, padding=padding,
                           name='primarycap_conv2d')(inputs)
    outputs = layers.Reshape(target_shape=[-1, dim_capsule], name='primarycap_reshape')(output)
    return layers.Lambda(squash, name='primarycap_squash')(outputs)
```
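A quick, illustrative check (not part of the original script) that `squash` maps capsule vectors to lengths strictly between 0 and 1:
```
import numpy as np

v = K.constant(np.random.randn(4, 8))        # 4 capsules with 8-dimensional vectors
s = K.eval(squash(v))
print(np.linalg.norm(K.eval(v), axis=-1))    # arbitrary norms
print(np.linalg.norm(s, axis=-1))            # squashed norms lie in (0, 1)
```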
# Code for capsule_net.py
```
import numpy as np
from keras import backend as K
from keras import layers, models, optimizers
from keras.utils import to_categorical
import matplotlib.pyplot as plt
from PIL import Image
K.set_image_data_format("channels_last")
def CapsNet(input_shape, n_class, routings):
"""
A capsule network on fashion MNIST
:param input_shape: data shape, 3d, [width, height, channels]
:param n_class: number of classes
:routings: number of routing iterations
:return: Two Keras Models, the first one used for training, and the second one for evaluation.
`eval_model` can also be used for training
"""
x = layers.Input(shape=input_shape)
# Layer 1: just a convolutional Conv2D layer
conv1 = layers.Conv2D(filters=256, kernel_size=9, strides=1, padding='valid', activation='relu', name='conv1')(x)
# Layer 2: Conv2D layer with `squash` activation, then reshape to [None, num_capsule, dim_capsule]
primarycaps = PrimaryCap(conv1, dim_capsule=8, n_channels=32, kernel_size = 9, strides=2, padding='valid')
# Layer 3: Capsule layer. Routing algorithm works here
digitcaps = CapsuleLayer(num_capsule=n_class, dim_capsule=16, routings=routings, name='digitcaps')(primarycaps)
# Layer 4: This is an auxiliary layer to replace each capsule with its length. Just to match the true label's shape.
# If using TensorFlow, this will not be necessary. :)
out_caps = Length(name='capsnet')(digitcaps)
# Decoder network.
y = layers.Input(shape=(n_class,))
masked_by_y = Mask()([digitcaps, y]) # The true label is used to mask the output of capsule layer. (for training)
masked = Mask()(digitcaps) # Mask using the capsule with maximum length. (for prediction)
# Shared Decoder Model in training and prediction
decoder = models.Sequential(name='decoder')
decoder.add(layers.Dense(512, activation='relu', input_dim=16*n_class))
decoder.add(layers.Dense(1024, activation='relu'))
decoder.add(layers.Dense(np.prod(input_shape), activation='sigmoid'))
decoder.add(layers.Reshape(target_shape=input_shape, name='out_recon'))
# Models for training and evaluation (prediction)
train_model = models.Model([x,y], [out_caps, decoder(masked_by_y)])
eval_model = models.Model(x, [out_caps, decoder(masked)])
# manipulate model
noise = layers.Input(shape=(n_class, 16))
noised_digitcaps = layers.Add()([digitcaps, noise])
masked_noised_y = Mask()([noised_digitcaps, y])
manipulate_model = models.Model([x, y, noise], decoder(masked_noised_y))
return train_model, eval_model, manipulate_model
def load_fashion_mnist():
# the data, shuffled and split between train and test sets
from keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0
y_train = to_categorical(y_train.astype('float32'))
y_test = to_categorical(y_test.astype('float32'))
return (x_train, y_train), (x_test, y_test)
import os
import argparse
from keras.preprocessing.image import ImageDataGenerator
from keras import callbacks
# setting the hyper parameters
parser = argparse.ArgumentParser(description="Capsule network on Fashion MNIST")
parser.add_argument('--epochs', default=50, type=int)
parser.add_argument('--batch_size', default=100, type=int)
parser.add_argument('--lr', default=0.001, type=float, help="Initial learning rate")
parser.add_argument('--lr_decay', default=0.9, type=float, help="The value multiplied by lr at each epoch. Set a larger value for larger epochs")
parser.add_argument('--lam_recon', default=0.392, type=float, help="The coefficient for the decoder reconstruction loss")
parser.add_argument('-r', '--routings', default=3, type=int, help="Number of iterations used in routing algorithm. Should > 0")
parser.add_argument('--shift_fraction', default=0.1, type=float, help="Fraction of pixels to shift at most in each direction.")
parser.add_argument('--debug', action='store_true', help="Save weights by TensorBoard")
parser.add_argument('--save_dir', default='./result')
parser.add_argument('-t', '--testing', action='store_true', help="Test the trained model on testing dataset")
parser.add_argument('--digit', default=5, type=int, help="Digit to manipulate")
parser.add_argument('-w', '--weights', default=None, help="The path of the saved weights. Should be specified when testing.")
args = parser.parse_args(["--epochs", "2"])
print(args)
if not os.path.exists(args.save_dir):
os.makedirs(args.save_dir)
# load the data
(x_train, y_train), (x_test, y_test) = load_fashion_mnist()
# define the model
model, eval_model, manipulate_model = CapsNet(input_shape=x_train.shape[1:],
n_class=len(np.unique(np.argmax(y_train, 1))),
routings=args.routings)
model.summary()
if args.weights is not None: # init the model weights with provided one
model.load_weights(args.weights)
if not args.testing:
train(model=model, data=((x_train, y_train), (x_test, y_test)), args=args)
else:
if args.weights is None:
print("No weights provided. Will test using random initialized weights.")
manipulate_latent(manipulate_model, (x_test, y_test), args)
test(model=eval_model, data=(x_test, y_test), args=args)
```
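The `train()`, `test()` and `manipulate_latent()` helpers called above are not defined in this cell. For reference, a sketch of the margin loss commonly used to train a CapsNet is shown below; the 0.9, 0.1 and 0.5 constants follow Sabour et al. (2017) rather than anything in this notebook.
```
def margin_loss(y_true, y_pred):
    """Margin loss commonly used for the classification head of a CapsNet (illustrative)."""
    L = y_true * K.square(K.maximum(0., 0.9 - y_pred)) + \
        0.5 * (1 - y_true) * K.square(K.maximum(0., y_pred - 0.1))
    return K.mean(K.sum(L, 1))
```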
# HiddenLayer Graph Demo - TensorFlow
```
import os
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets
import hiddenlayer as hl
import hiddenlayer.transforms as ht
# Hide GPUs. Not needed for this demo.
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
## VGG 16
```
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.vgg.vgg_16(inputs)
# Build HiddenLayer graph
hl_graph = hl.build_graph(tf_graph)
# Display graph
# Jupyter Notebook renders it automatically
hl_graph
```
# Alexnet v2
```
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.alexnet.alexnet_v2(inputs)
# Build layout
hl_graph = hl.build_graph(tf_graph)
# Use a different color theme
hl_graph.theme = hl.graph.THEMES["blue"].copy() # Two options: basic and blue
# Display
hl_graph
```
# Inception v1
```
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.inception.inception_v1(inputs)
# Build layout
hl_graph = hl.build_graph(tf_graph)
# Display
hl_graph
```
## Transforms and Graph Expressions
A Graph Expression is like a Regular Expression for graphs. It simplifies searching for nodes that fit a particular pattern. For example, the graph expression `Conv > Relu` will find Conv layers that are followed by RELU layers. And the expressions `Conv | MaxPool` will match any Conv and MaxPool layers that are in parallel branches (i.e. have the same parent node). See examples of more complex graph expressions below.
Once the graph expression finds the nodes, we use Transforms to modify them. For example, if we want to delete all nodes of type `Const`, we'll use the transform `Prune("Const")`. The graph expression here is simple, `Const`, which matches any node with operation of type Const. And the Prune() transform deletes the node.
See more examples below. And, also, check `SIMPLICITY_TRANSFORMS` in `transforms.py`.
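For example, a minimal transforms list that prunes `Const` nodes and folds Conv/Relu pairs might look like this (illustrative only; the exact set of available transforms is in `transforms.py`):
```
simple_transforms = [
    ht.Prune("Const"),                    # delete all Const nodes
    ht.Fold("Conv > Relu", "ConvRelu"),   # merge Conv followed by Relu into a single node
    ht.FoldDuplicates(),                  # collapse repeated identical nodes
]
# hl_graph = hl.build_graph(tf_graph, transforms=simple_transforms)
```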
# Inception v1 with Simplified Inception Modules
```
# Define custom transforms to replace the default ones
transforms = [
# Fold inception blocks into one node
ht.Fold("""
( (MaxPool > Conv > Relu) |
(Conv > Relu > Conv > Relu) |
(Conv > Relu > Conv > Relu) |
(Conv > Relu)
) > Concat
""", "Inception", "Inception Module"),
# Fold Conv and Relu together if they come together
ht.Fold("Conv > Relu", "ConvRelu"),
# Fold repeated nodes
ht.FoldDuplicates(),
]
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.inception.inception_v1(inputs)
# Build layout
hl_graph = hl.build_graph(tf_graph, transforms=transforms)
# Display
hl_graph.theme = hl.graph.THEMES["blue"].copy()
hl_graph
```
## ResNet v1 50
```
# Custom transforms to group nodes of residual and bottleneck blocks
transforms = [
# Fold Pad into the Conv that follows it
ht.Fold("Pad > Conv", "__last__"),
# Fold Conv/Relu
ht.Fold("Conv > Relu", "ConvRelu"),
# Fold bottleneck blocks
hl.transforms.Fold("""
((ConvRelu > ConvRelu > Conv) | Conv) > Add > Relu
""", "BottleneckBlock", "Bottleneck Block"),
# Fold residual blocks
hl.transforms.Fold("""ConvRelu > ConvRelu > Conv > Add > Relu""",
"ResBlock", "Residual Block"),
]
# Build TensorFlow graph
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Build model
predictions, _ = nets.resnet_v1.resnet_v1_50(inputs)
# Build HiddenLayer graph
hl_graph = hl.build_graph(tf_graph, transforms=transforms)
# Customize the theme. The theme is a simple dict defined in graph.py
hl_graph.theme.update({
"fill_color": "#789263",
"outline_color": "#789263",
"font_color": "#FFFFFF",
})
# Display
hl_graph
```
# Overfeat
```
with tf.Session() as sess:
with tf.Graph().as_default() as tf_graph:
# Setup input placeholder
inputs = tf.placeholder(tf.float32, shape=(1, 231, 231, 3))
# Build model
predictions, _ = nets.overfeat.overfeat(inputs)
# Build layout
hl_graph = hl.build_graph(tf_graph)
# Display
hl_graph
```
# Overlap matrices
This notebook will look at different ways of plotting overlap matrices and making them visually appealing.
One way to check that colour choices work for colour-blind people is to use this tool: https://davidmathlogic.com/colorblind
```
%pylab inline
import pandas as pd
import seaborn as sbn
sbn.set_style("ticks")
sbn.set_context("notebook", font_scale = 1.5)
data = np.loadtxt('raw_matrices_review.dat')
good = (data[:9][:])
bad = data[-9:][:]
ugly = data[9:18][:]
# Your Standard plot
figsize(8, 8)
ax = sbn.heatmap(bad,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=sbn.light_palette((210, 90, 60), input="husl") )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
# Changing the colour map
from matplotlib import colors
from matplotlib.colors import LogNorm
#cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
cmap = colors.ListedColormap(['#117733','#88CCEE', '#FBE8EB'])
bounds=[0.0, 0.025, 0.1, 0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[0.2, 0.4, 0.6, 0.8 ,1.0])
#ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm,cbar_kws=cbar_kws )
ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=0, linecolor='white', annot_kws={"size": 14},square=True,robust=True,cmap='bone_r', vmin=0, vmax=1 )
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
for _, spine in ax.spines.items():
spine.set_visible(True)
show_annot_array = ugly >= 0.0001
for text, show_annot in zip(ax.texts, (element for row in show_annot_array for element in row)):
text.set_visible(show_annot)
# Changing the colour map
from matplotlib import colors
from matplotlib.colors import LogNorm
#cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
cmap = colors.ListedColormap(['#117733','#88CCEE', '#FBE8EB'])
bounds=[0.0, 0.025, 0.1, 0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[0.2, 0.4, 0.6, 0.8 ,1.0])
#ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm,cbar_kws=cbar_kws )
ax = sbn.heatmap(good,annot=True, fmt='.2f', linewidths=0, linecolor='black', annot_kws={"size": 14},square=True,robust=True,cmap='bone_r',vmin=0, vmax=1 )
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
for _, spine in ax.spines.items():
spine.set_visible(True)
show_annot_array = good >= 0.001
for text, show_annot in zip(ax.texts, (element for row in show_annot_array for element in row)):
text.set_visible(show_annot)
# Changing the colour map
from matplotlib import colors
from matplotlib.colors import LogNorm
#cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
cmap = colors.ListedColormap(['#117733','#88CCEE', '#FBE8EB'])
bounds=[0.0, 0.025, 0.1, 0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[0.2, 0.4, 0.6, 0.8 ,1.0])
#ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm,cbar_kws=cbar_kws )
ax = sbn.heatmap(bad,annot=True, fmt='.2f', linewidths=0, linecolor='black', annot_kws={"size": 14},square=True,robust=True,cmap='bone_r',vmin=0, vmax=1 )
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
for _, spine in ax.spines.items():
spine.set_visible(True)
show_annot_array = bad >= 0.01
for text, show_annot in zip(ax.texts, (element for row in show_annot_array for element in row)):
text.set_visible(show_annot)
# Changing the colour map
from matplotlib import colors
#cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
bounds=[0.0, 0.025, 0.1, 0.3,0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[.025, .1, .3,0.8])
ax = sbn.heatmap(ugly,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm,cbar_kws=cbar_kws )
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
bounds=[0.0, 0.025, 0.1, 0.3,0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[.025, .1, .3,0.8])
ax = sbn.heatmap(bad,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True,cmap=cmap, norm=norm, cbar_kws=cbar_kws )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
cmap = colors.ListedColormap(['#FBE8EB','#88CCEE','#78C592', '#117733'])
bounds=[0.0, 0.025, 0.1, 0.3,0.8]
norm = colors.BoundaryNorm(bounds, cmap.N, clip=False)
cbar_kws=dict(ticks=[.025, .1, .3,0.8])
ax = sbn.heatmap(good,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},square=True,robust=True, cmap=cmap, norm=norm,vmin=0,vmax=1,cbar_kws=cbar_kws )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
cbar_kws = {'ticks': [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]}
# Playing with pandas and getting more exotic
df = pd.DataFrame(bad, columns=["1","2","3","4","5","6","7","8","9"])
#https://towardsdatascience.com/better-heatmaps-and-correlation-matrix-plots-in-python-41445d0f2bec
def heatmap(x, y, x1,y1, **kwargs):
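    """Scatter-based heatmap helper adapted from the linked blog post.

    x and y supply the category labels used to build the tick positions, while
    x1 and y1 are the numeric grid coordinates actually passed to ax.scatter.
    Optional kwargs: color / palette / color_range control cell colour,
    size / size_range / size_scale control cell size, marker sets the glyph,
    and x_order / y_order fix the axis ordering.
    """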
if 'color' in kwargs:
color = kwargs['color']
else:
color = [1]*len(x)
if 'palette' in kwargs:
palette = kwargs['palette']
n_colors = len(palette)
else:
n_colors = 256 # Use 256 colors for the diverging color palette
palette = sbn.color_palette("Blues", n_colors)
if 'color_range' in kwargs:
color_min, color_max = kwargs['color_range']
else:
color_min, color_max = min(color), max(color) # Range of values that will be mapped to the palette, i.e. min and max possible correlation
def value_to_color(val):
if color_min == color_max:
return palette[-1]
else:
val_position = float((val - color_min)) / (color_max - color_min) # position of value in the input range, relative to the length of the input range
            val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
ind = int(val_position * (n_colors - 1)) # target index in the color palette
return palette[ind]
if 'size' in kwargs:
size = kwargs['size']
else:
size = [1]*len(x)
if 'size_range' in kwargs:
size_min, size_max = kwargs['size_range'][0], kwargs['size_range'][1]
else:
size_min, size_max = min(size), max(size)
size_scale = kwargs.get('size_scale', 500)
def value_to_size(val):
if size_min == size_max:
return 1 * size_scale
else:
val_position = (val - size_min) * 0.99 / (size_max - size_min) + 0.01 # position of value in the input range, relative to the length of the input range
            val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
return val_position * size_scale
if 'x_order' in kwargs:
x_names = [t for t in kwargs['x_order']]
else:
x_names = [t for t in sorted(set([v for v in x]))]
x_to_num = {p[1]:p[0] for p in enumerate(x_names)}
if 'y_order' in kwargs:
y_names = [t for t in kwargs['y_order']]
else:
y_names = [t for t in sorted(set([v for v in y]))]
y_to_num = {p[1]:p[0] for p in enumerate(y_names)}
    plot_grid = plt.GridSpec(1, 15, hspace=0.2, wspace=0.1) # Setup a 1x15 grid
ax = plt.subplot(plot_grid[:,:-1]) # Use the left 14/15ths of the grid for the main plot
marker = kwargs.get('marker', 's')
kwargs_pass_on = {k:v for k,v in kwargs.items() if k not in [
'color', 'palette', 'color_range', 'size', 'size_range', 'size_scale', 'marker', 'x_order', 'y_order'
]}
print(x_names)
print(y_names)
print('here------------')
ax.scatter(
x=x1,
y=y1,
marker=marker,
s=[value_to_size(v) for v in size],
c=[value_to_color(v) for v in color],
**kwargs_pass_on
)
ax.set_xticks([v for k,v in x_to_num.items()])
ax.set_xticklabels([k for k in x_to_num], rotation=45, horizontalalignment='right')
ax.set_yticks([v for k,v in y_to_num.items()])
ax.set_yticklabels([k for k in y_to_num])
ax.grid(False, 'major')
ax.grid(True, 'minor')
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
ax.set_xlim([-0.5, max([v for v in x_to_num.values()]) + 0.5])
ax.set_ylim([-0.5, max([v for v in y_to_num.values()]) + 0.5])
ax.set_facecolor('#F1F1F1')
# Add color legend on the right side of the plot
if color_min < color_max:
ax = plt.subplot(plot_grid[:,-1]) # Use the rightmost column of the plot
col_x = [0]*len(palette) # Fixed x coordinate for the bars
bar_y=np.linspace(color_min, color_max, n_colors) # y coordinates for each of the n_colors bars
bar_height = bar_y[1] - bar_y[0]
ax.barh(
y=bar_y,
width=[5]*len(palette), # Make bars 5 units wide
left=col_x, # Make bars start at 0
height=bar_height,
color=palette,
linewidth=0
)
ax.set_xlim(1, 2) # Bars are going from 0 to 5, so lets crop the plot somewhere in the middle
ax.grid(False) # Hide grid
ax.set_facecolor('white') # Make background white
ax.set_xticks([]) # Remove horizontal ticks
ax.set_yticks(np.linspace(min(bar_y), max(bar_y), 3)) # Show vertical ticks for min, middle and max
ax.yaxis.tick_right() # Show vertical ticks on the right
def corrplot(data, size_scale=500, marker='s'):
corr = pd.melt(data.reset_index(), id_vars='index')
print(corr)
corr.columns = ['index', 'variable', 'value']
x_names = [t for t in sorted(set([v for v in corr['index']]))]
x_to_num = {p[1]:p[0] for p in enumerate(x_names)}
x=[x_to_num[v] for v in corr['index']]
y_names = [t for t in sorted(set([v for v in corr['index']]))]
y_to_num = {p[1]:p[0] for p in enumerate(y_names)}
y=[y_to_num[v] for v in corr['index']]
heatmap(
corr['index'], corr['value'],x1,y1,
color=corr['value'], color_range=[0, 1],
palette=sbn.diverging_palette(20, 220, n=256),
size=corr['value'].abs(), size_range=[0,1],
marker=marker,
x_order=data.columns,
y_order=data.columns[::-1],
size_scale=size_scale
)
corrplot(df)
corr = pd.melt(df.reset_index(), id_vars='index')
print(corr)
x_names = [t for t in sorted(set([v for v in corr['index']]))]
x_to_num = {p[1]:p[0] for p in enumerate(x_names)}
x1=[x_to_num[v] for v in corr['index']]
y_names = [t for t in sorted(set([v for v in corr['variable']]))]
y_to_num = {p[1]:p[0] for p in enumerate(y_names)}
y1=[y_to_num[v] for v in corr['variable']]
def value_to_size(val):
if size_min == size_max:
return 1 * size_scale
else:
val_position = (val - size_min) * 0.99 / (size_max - size_min) + 0.01 # position of value in the input range, relative to the length of the input range
        val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
return val_position * size_scale
value_names = [t for t in sorted(set([v for v in corr['value']]))]
value = []
for v in corr['value']:
value.append(v)
for v in corr['value']:
print (v)
n_colors = 256 # Use 256 colors for the diverging color palette
palette = sbn.cubehelix_palette(n_colors)
mapping = linspace(0,1,256)
c_index = np.digitize(value, mapping)
plot_colors =[]
for i in c_index:
plot_colors.append(palette[i])
s =np.array(value)*4000
fig = figsize(10,10)
plot_grid = plt.GridSpec(1, 15, hspace=0.2, wspace=0.1) # Setup a 1x15 grid
ax = plt.subplot(plot_grid[:,:-1]) # Use the left 14/15ths of the grid for the main plot
ax.scatter(x1,y1,marker='s',s=s,c=plot_colors)
sbn.despine()
ax.grid(False, 'major')
ax.grid(True, 'minor', color='white')
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
ax.set_xlim([-0.5, max([v for v in x_to_num.values()]) + 0.5])
ax.set_ylim([-0.5, max([v for v in y_to_num.values()]) + 0.5])
ax.set_facecolor((0,0,0))
plt.gca().invert_yaxis()
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
xlabel(r'$\lambda$ index')
ylabel(r'$\lambda$ index')
def value_to_size(val, value):
size_scale = 500
size = [1]*len(value)
size_min, size_max = min(size), max(size)
if size_min == size_max:
return 1 * size_scale
else:
val_position = (val - size_min) * 0.99 / (size_max - size_min) + 0.01 # position of value in the input range, relative to the length of the input range
        val_position = min(max(val_position, 0), 1) # bound the position between 0 and 1
return val_position * size_scale
heatmap2
value_to_size(value[5], value)
from biokit.viz import corrplot
c = corrplot.Corrplot(df)
c.plot()
def plot(index, columns):
values = "bad_status"
vmax = 0.10
cellsize_vmax = 10000
g_ratio = df.pivot_table(index=index, columns=columns, values=values, aggfunc="mean")
g_size = df.pivot_table(index=index, columns=columns, values=values, aggfunc="size")
annot = np.vectorize(lambda x: "" if np.isnan(x) else "{:.1f}%".format(x * 100))(g_ratio)
# adjust visual balance
figsize = (g_ratio.shape[1] * 0.8, g_ratio.shape[0] * 0.8)
cbar_width = 0.05 * 6.0 / figsize[0]
f, ax = plt.subplots(1, 1, figsize=figsize)
cbar_ax = f.add_axes([.91, 0.1, cbar_width, 0.8])
heatmap2(g_ratio, ax=ax, cbar_ax=cbar_ax,
vmax=vmax, cmap="PuRd", annot=annot, fmt="s", annot_kws={"fontsize":"small"},
cellsize=g_size, cellsize_vmax=cellsize_vmax,
square=True, ax_kws={"title": "{} x {}".format(index, columns)})
plt.show()
"""
This script is created by modifying seaborn matrix.py
in https://github.com/mwaskom/seaborn, by Michael L. Waskom
"""
from __future__ import division
import itertools
import matplotlib as mpl
from matplotlib.collections import LineCollection
import matplotlib.pyplot as plt
from matplotlib import gridspec
import matplotlib.patheffects as patheffects
import numpy as np
import pandas as pd
from scipy.cluster import hierarchy
import seaborn as sns
from seaborn import cm
from seaborn.axisgrid import Grid
from seaborn.utils import (despine, axis_ticklabels_overlap, relative_luminance, to_utf8)
from seaborn.external.six import string_types
def _index_to_label(index):
"""Convert a pandas index or multiindex to an axis label."""
if isinstance(index, pd.MultiIndex):
return "-".join(map(to_utf8, index.names))
else:
return index.name
def _index_to_ticklabels(index):
"""Convert a pandas index or multiindex into ticklabels."""
if isinstance(index, pd.MultiIndex):
return ["-".join(map(to_utf8, i)) for i in index.values]
else:
return index.values
def _matrix_mask(data, mask):
"""Ensure that data and mask are compatabile and add missing values.
Values will be plotted for cells where ``mask`` is ``False``.
``data`` is expected to be a DataFrame; ``mask`` can be an array or
a DataFrame.
"""
if mask is None:
mask = np.zeros(data.shape, np.bool)
if isinstance(mask, np.ndarray):
# For array masks, ensure that shape matches data then convert
if mask.shape != data.shape:
raise ValueError("Mask must have the same shape as data.")
mask = pd.DataFrame(mask,
index=data.index,
columns=data.columns,
dtype=np.bool)
elif isinstance(mask, pd.DataFrame):
# For DataFrame masks, ensure that semantic labels match data
if not mask.index.equals(data.index) \
and mask.columns.equals(data.columns):
err = "Mask must have the same index and columns as data."
raise ValueError(err)
# Add any cells with missing data to the mask
# This works around an issue where `plt.pcolormesh` doesn't represent
# missing data properly
mask = mask | pd.isnull(data)
return mask
class _HeatMapper2(object):
"""Draw a heatmap plot of a matrix with nice labels and colormaps."""
def __init__(self, data, vmin, vmax, cmap, center, robust, annot, fmt,
annot_kws, cellsize, cellsize_vmax,
cbar, cbar_kws,
xticklabels=True, yticklabels=True, mask=None, ax_kws=None, rect_kws=None):
"""Initialize the plotting object."""
# We always want to have a DataFrame with semantic information
# and an ndarray to pass to matplotlib
if isinstance(data, pd.DataFrame):
plot_data = data.values
else:
plot_data = np.asarray(data)
data = pd.DataFrame(plot_data)
        # Validate the mask and convert to DataFrame
mask = _matrix_mask(data, mask)
plot_data = np.ma.masked_where(np.asarray(mask), plot_data)
# Get good names for the rows and columns
xtickevery = 1
if isinstance(xticklabels, int):
xtickevery = xticklabels
xticklabels = _index_to_ticklabels(data.columns)
elif xticklabels is True:
xticklabels = _index_to_ticklabels(data.columns)
elif xticklabels is False:
xticklabels = []
ytickevery = 1
if isinstance(yticklabels, int):
ytickevery = yticklabels
yticklabels = _index_to_ticklabels(data.index)
elif yticklabels is True:
yticklabels = _index_to_ticklabels(data.index)
elif yticklabels is False:
yticklabels = []
# Get the positions and used label for the ticks
nx, ny = data.T.shape
if not len(xticklabels):
self.xticks = []
self.xticklabels = []
elif isinstance(xticklabels, string_types) and xticklabels == "auto":
self.xticks = "auto"
self.xticklabels = _index_to_ticklabels(data.columns)
else:
self.xticks, self.xticklabels = self._skip_ticks(xticklabels,
xtickevery)
if not len(yticklabels):
self.yticks = []
self.yticklabels = []
elif isinstance(yticklabels, string_types) and yticklabels == "auto":
self.yticks = "auto"
self.yticklabels = _index_to_ticklabels(data.index)
else:
self.yticks, self.yticklabels = self._skip_ticks(yticklabels,
ytickevery)
# Get good names for the axis labels
xlabel = _index_to_label(data.columns)
ylabel = _index_to_label(data.index)
self.xlabel = xlabel if xlabel is not None else ""
self.ylabel = ylabel if ylabel is not None else ""
# Determine good default values for the colormapping
self._determine_cmap_params(plot_data, vmin, vmax,
cmap, center, robust)
# Determine good default values for cell size
self._determine_cellsize_params(plot_data, cellsize, cellsize_vmax)
# Sort out the annotations
if annot is None:
annot = False
annot_data = None
elif isinstance(annot, bool):
if annot:
annot_data = plot_data
else:
annot_data = None
else:
try:
annot_data = annot.values
except AttributeError:
annot_data = annot
if annot.shape != plot_data.shape:
raise ValueError('Data supplied to "annot" must be the same '
'shape as the data to plot.')
annot = True
# Save other attributes to the object
self.data = data
self.plot_data = plot_data
self.annot = annot
self.annot_data = annot_data
self.fmt = fmt
self.annot_kws = {} if annot_kws is None else annot_kws
#self.annot_kws.setdefault('color', "black")
self.annot_kws.setdefault('ha', "center")
self.annot_kws.setdefault('va', "center")
self.cbar = cbar
self.cbar_kws = {} if cbar_kws is None else cbar_kws
self.cbar_kws.setdefault('ticks', mpl.ticker.MaxNLocator(6))
self.ax_kws = {} if ax_kws is None else ax_kws
self.rect_kws = {} if rect_kws is None else rect_kws
# self.rect_kws.setdefault('edgecolor', "black")
def _determine_cmap_params(self, plot_data, vmin, vmax,
cmap, center, robust):
"""Use some heuristics to set good defaults for colorbar and range."""
calc_data = plot_data.data[~np.isnan(plot_data.data)]
if vmin is None:
vmin = np.percentile(calc_data, 2) if robust else calc_data.min()
if vmax is None:
vmax = np.percentile(calc_data, 98) if robust else calc_data.max()
self.vmin, self.vmax = vmin, vmax
# Choose default colormaps if not provided
if cmap is None:
if center is None:
self.cmap = cm.rocket
else:
self.cmap = cm.icefire
elif isinstance(cmap, string_types):
self.cmap = mpl.cm.get_cmap(cmap)
elif isinstance(cmap, list):
self.cmap = mpl.colors.ListedColormap(cmap)
else:
self.cmap = cmap
# Recenter a divergent colormap
if center is not None:
vrange = max(vmax - center, center - vmin)
normlize = mpl.colors.Normalize(center - vrange, center + vrange)
cmin, cmax = normlize([vmin, vmax])
cc = np.linspace(cmin, cmax, 256)
self.cmap = mpl.colors.ListedColormap(self.cmap(cc))
def _determine_cellsize_params(self, plot_data, cellsize, cellsize_vmax):
if cellsize is None:
self.cellsize = np.ones(plot_data.shape)
self.cellsize_vmax = 1.0
else:
if isinstance(cellsize, pd.DataFrame):
cellsize = cellsize.values
self.cellsize = cellsize
if cellsize_vmax is None:
cellsize_vmax = cellsize.max()
self.cellsize_vmax = cellsize_vmax
def _skip_ticks(self, labels, tickevery):
"""Return ticks and labels at evenly spaced intervals."""
n = len(labels)
if tickevery == 0:
ticks, labels = [], []
elif tickevery == 1:
ticks, labels = np.arange(n) + .5, labels
else:
start, end, step = 0, n, tickevery
ticks = np.arange(start, end, step) + .5
labels = labels[start:end:step]
return ticks, labels
def _auto_ticks(self, ax, labels, axis):
"""Determine ticks and ticklabels that minimize overlap."""
transform = ax.figure.dpi_scale_trans.inverted()
bbox = ax.get_window_extent().transformed(transform)
size = [bbox.width, bbox.height][axis]
axis = [ax.xaxis, ax.yaxis][axis]
tick, = axis.set_ticks([0])
fontsize = tick.label.get_size()
max_ticks = int(size // (fontsize / 72))
if max_ticks < 1:
return [], []
tick_every = len(labels) // max_ticks + 1
tick_every = 1 if tick_every == 0 else tick_every
ticks, labels = self._skip_ticks(labels, tick_every)
return ticks, labels
def plot(self, ax, cax):
"""Draw the heatmap on the provided Axes."""
# Remove all the Axes spines
#despine(ax=ax, left=True, bottom=True)
# Draw the heatmap and annotate
height, width = self.plot_data.shape
xpos, ypos = np.meshgrid(np.arange(width) + .5, np.arange(height) + .5)
data = self.plot_data.data
cellsize = self.cellsize
mask = self.plot_data.mask
if not isinstance(mask, np.ndarray) and not mask:
mask = np.zeros(self.plot_data.shape, np.bool)
annot_data = self.annot_data
if not self.annot:
annot_data = np.zeros(self.plot_data.shape)
# Draw rectangles instead of using pcolormesh
# Might be slower than original heatmap
for x, y, m, val, s, an_val in zip(xpos.flat, ypos.flat, mask.flat, data.flat, cellsize.flat, annot_data.flat):
if not m:
vv = (val - self.vmin) / (self.vmax - self.vmin)
size = np.clip(s / self.cellsize_vmax, 0.1, 1.0)
color = self.cmap(vv)
rect = plt.Rectangle([x - size / 2, y - size / 2], size, size, facecolor=color, **self.rect_kws)
ax.add_patch(rect)
if self.annot:
annotation = ("{:" + self.fmt + "}").format(an_val)
text = ax.text(x, y, annotation, **self.annot_kws)
print(text)
# add edge to text
text_luminance = relative_luminance(text.get_color())
text_edge_color = ".15" if text_luminance > .408 else "w"
text.set_path_effects([mpl.patheffects.withStroke(linewidth=1, foreground=text_edge_color)])
# Set the axis limits
ax.set(xlim=(0, self.data.shape[1]), ylim=(0, self.data.shape[0]))
# Set other attributes
ax.set(**self.ax_kws)
if self.cbar:
norm = mpl.colors.Normalize(vmin=self.vmin, vmax=self.vmax)
scalar_mappable = mpl.cm.ScalarMappable(cmap=self.cmap, norm=norm)
scalar_mappable.set_array(self.plot_data.data)
cb = ax.figure.colorbar(scalar_mappable, cax, ax, **self.cbar_kws)
cb.outline.set_linewidth(0)
# if kws.get('rasterized', False):
# cb.solids.set_rasterized(True)
# Add row and column labels
if isinstance(self.xticks, string_types) and self.xticks == "auto":
xticks, xticklabels = self._auto_ticks(ax, self.xticklabels, 0)
else:
xticks, xticklabels = self.xticks, self.xticklabels
if isinstance(self.yticks, string_types) and self.yticks == "auto":
yticks, yticklabels = self._auto_ticks(ax, self.yticklabels, 1)
else:
yticks, yticklabels = self.yticks, self.yticklabels
ax.set(xticks=xticks, yticks=yticks)
xtl = ax.set_xticklabels(xticklabels)
ytl = ax.set_yticklabels(yticklabels, rotation="vertical")
# Possibly rotate them if they overlap
ax.figure.draw(ax.figure.canvas.get_renderer())
if axis_ticklabels_overlap(xtl):
plt.setp(xtl, rotation="vertical")
if axis_ticklabels_overlap(ytl):
plt.setp(ytl, rotation="horizontal")
# Add the axis labels
ax.set(xlabel=self.xlabel, ylabel=self.ylabel)
# Invert the y axis to show the plot in matrix form
ax.invert_yaxis()
def heatmap2(data, vmin=None, vmax=None, cmap=None, center=None, robust=False,
annot=None, fmt=".2g", annot_kws=None,
cellsize=None, cellsize_vmax=None,
cbar=True, cbar_kws=None, cbar_ax=None,
square=False, xticklabels="auto", yticklabels="auto",
mask=None, ax=None, ax_kws=None, rect_kws=None):
# Initialize the plotter object
plotter = _HeatMapper2(data, vmin, vmax, cmap, center, robust,
annot, fmt, annot_kws,
cellsize, cellsize_vmax,
cbar, cbar_kws, xticklabels,
yticklabels, mask, ax_kws, rect_kws)
# Draw the plot and return the Axes
if ax is None:
ax = plt.gca()
if square:
ax.set_aspect("equal")
# delete grid
ax.grid(False)
plotter.plot(ax, cbar_ax)
return ax
fig =figsize(10,10)
ax = heatmap2(good,annot=True, fmt='.2f',cellsize=np.array(value),cellsize_vmax=1, annot_kws={"size": 13},square=True,robust=True,cmap='PiYG' )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
ax.grid(False, 'major')
ax.grid(True, 'minor', color='black', alpha=0.3)
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
ax.xaxis.tick_top()
ax.xaxis.set_label_position('top')
fig =figsize(8,8)
ax = sbn.heatmap(good,annot=True, fmt='.2f', linewidths=.3, annot_kws={"size": 14},cmap=sbn.light_palette((210, 90, 60), input="husl") )
ax.set_xlabel(r'$\lambda$ index')
ax.set_ylabel(r'$\lambda$ index')
sbn.despine()
ax.grid(False, 'major')
ax.grid(True, 'minor', color='white')
ax.set_xticks([t + 0.5 for t in ax.get_xticks()], minor=True)
ax.set_yticks([t + 0.5 for t in ax.get_yticks()], minor=True)
```
```
import numpy as np
import pandas as pd
import tensorflow as tf
import pickle
```
## Data Preprocessing
```
# Loading formatted data
# I format the data into pandas DataFrames
# See data_formatting.ipynb for details
train_data = pd.read_pickle("../dataset/train.pickle")
validate_data = pd.read_pickle("../dataset/validate.pickle")
test_data = pd.read_pickle("../dataset/test.pickle")
```
### Tokenize the source code
#### BoW
For data batching convenience, the paper trained only on functions with token length $10 \leq l \leq 500$, padded to the maximum length of **500**.
The paper does not mention whether the 0-padding goes at the beginning or the end, so I assume it is appended at the end (in practice this makes little difference for a CNN).
`text_to_word_sequence` does not work here since it expects a single string rather than a list of strings.
```
# train_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(train_data[0])
# x_train = tf.keras.preprocessing.sequence.pad_sequences(train_tokenized, maxlen=500, padding="post")
# validate_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(validate_data[0])
# x_validate = tf.keras.preprocessing.sequence.pad_sequences(validate_tokenized, maxlen=500, padding="post")
# test_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(test_data[0])
# x_test = tf.keras.preprocessing.sequence.pad_sequences(test_tokenized, maxlen=500, padding="post")
```
#### Init the Tokenizer
#### BoW
```
# The paper does not state the vocabulary size to track; I use 10000 here
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=10000)
# Required before using texts_to_sequences
# Argument: a list of strings
tokenizer.fit_on_texts(list(train_data[0]))
```
```
train_tokenized = tokenizer.texts_to_sequences(train_data[0])
x_train = tf.keras.preprocessing.sequence.pad_sequences(train_tokenized, maxlen=500, padding="post")
validate_tokenized = tokenizer.texts_to_sequences(validate_data[0])
x_validate = tf.keras.preprocessing.sequence.pad_sequences(validate_tokenized, maxlen=500, padding="post")
test_tokenized = tokenizer.texts_to_sequences(test_data[0])
x_test = tf.keras.preprocessing.sequence.pad_sequences(test_tokenized, maxlen=500, padding="post")
y_train = train_data[train_data.columns[2]].astype(int)
y_validate = validate_data[validate_data.columns[2]].astype(int)
y_test = test_data[test_data.columns[2]].astype(int)
```
## Model Design
This dataset is highly imbalanced, so I adjust the class weights during training, following
https://www.tensorflow.org/tutorials/structured_data/imbalanced_data
```
clear, vulnerable = (train_data[train_data.columns[2]]).value_counts()
total = vulnerable + clear
print("Total: {}\n Vulnerable: {} ({:.2f}% of total)\n".format(total, vulnerable, 100 * vulnerable / total))
weight_for_0 = (1 / clear)*(total)/2.0
weight_for_1 = (1 / vulnerable)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(input_dim=10000, output_dim=13, input_length=500))
model.add(tf.keras.layers.Conv1D(filters=512, kernel_size=9, activation="relu"))
model.add(tf.keras.layers.MaxPool1D(pool_size=4))
model.add(tf.keras.layers.Dropout(rate=0.5))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units=64, activation="relu"))
model.add(tf.keras.layers.Dense(units=16, activation="relu"))
# I am using the sigmoid rather than the softmax mentioned in the paper
model.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Adam optimization with a smaller learning rate
adam = tf.keras.optimizers.Adam(learning_rate=0.001)
# Define the evaluation metrics
METRICS = [
tf.keras.metrics.TruePositives(name='tp'),
tf.keras.metrics.FalsePositives(name='fp'),
tf.keras.metrics.TrueNegatives(name='tn'),
tf.keras.metrics.FalseNegatives(name='fn'),
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='auc'),
]
model.compile(optimizer=adam, loss="binary_crossentropy", metrics=METRICS)
model.summary()
history = model.fit(x=x_train, y=y_train, batch_size=128, epochs=10, verbose=1, class_weight=class_weight, validation_data=(x_validate, y_validate))
with open('CWE120_trainHistory', 'wb') as history_file:
pickle.dump(history.history, history_file)
model.save("Simple_CNN_CWE120")
results = model.evaluate(x_test, y_test, batch_size=128)
```
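`model.evaluate` returns the loss followed by the metrics in the order they were compiled. A minimal sketch for labelling them, using the standard Keras `metrics_names` attribute, and recomputing precision/recall from the raw confusion counts as a sanity check:
```
# Pair each returned scalar with its metric name (the loss comes first).
labeled = dict(zip(model.metrics_names, results))
for name, value in labeled.items():
    print(f"{name}: {value:.4f}")

# Sanity check: recompute precision and recall from the raw confusion counts.
tp, fp, fn = labeled["tp"], labeled["fp"], labeled["fn"]
print("precision:", tp / (tp + fp + 1e-9))
print("recall:", tp / (tp + fn + 1e-9))
```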
[](https://colab.research.google.com/github/Rishit-dagli/Android-Stream-Day-2020/blob/master/Rock_Paper_Scissors.ipynb)
# Rock Paper Scissors with TF Model Maker
The Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying the model for on-device ML applications.
This notebook shows an end-to-end example that uses the Model Maker library to adapt and convert a commonly used image classification model to classify rock, paper, and scissors images on a mobile device.
This is a part of an example where I show how one can very easily do on-device ML with TensorFlow Lite Model Maker and ML Model Binding Plugin.
## Setup
We need to install several required packages, including the Model Maker package from its GitHub [repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
```
!pip install git+git://github.com/tensorflow/examples.git#egg=tensorflow-examples[model_maker]
```
Import the required packages.
```
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_maker.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_maker.core.task import image_classifier
from tensorflow_examples.lite.model_maker.core.task.model_spec import mobilenet_v2_spec
from tensorflow_examples.lite.model_maker.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
```
## Training the model
### Get the data path
Let's get some images to play with for this simple end-to-end example. Hundreds of images are a good start for Model Maker, while more data can achieve better accuracy.
```
!wget https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps.zip
!unzip rps.zip
image_path = "rps"
```
You can replace `image_path` with your own image folder. To upload data to Colab, use the upload button in the left sidebar, shown in the image below with the red rectangle. Try uploading a zip file and unzipping it; the root file path is the current path.
<img src="http://storage.rishit.tech/storage/Android-Stream-Day-2020/upload-to-colab.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your images to the cloud, you could try to run the library locally following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker) in github.
### Run the example
The example consists of just four lines of code, shown below, each of which represents one step of the overall process.
1. Load input data specific to an on-device ML app. Split it to training data and testing data.
```
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
```
2. Customize the TensorFlow model.
```
model = image_classifier.create(train_data)
```
3. Evaluate the model.
```
loss, accuracy = model.evaluate(test_data)
```
4. Export to TensorFlow Lite model.
You can download it from the left sidebar (the same place as the upload button) for your own use.
```
model.export(export_dir='.', with_metadata=True)
```
5. Download the trained model by clicking on the folder icon on the left hand side. Right-click on "model.tflite" and select download. Or run the following code:
```
from google.colab import files
files.download('model.tflite')
```
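To sanity-check the exported model off-device, you can also load it with the standard TensorFlow Lite `Interpreter`. This is only a sketch: it assumes the exported file is named `model.tflite` and simply feeds a dummy input of the shape and dtype reported by the interpreter.
```
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image with the expected shape and dtype just to exercise the graph.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores)
```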
<a href="https://colab.research.google.com/github/pg1992/IA025_2022S1/blob/main/ex05/pedro_moreira/solution.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
nome = "Pedro Guilherme Siqueira Moreira"
print(f'Meu nome é {nome}')
```
This exercise consists of training on MNIST a model with two layers: the first a convolutional layer and the second a linear classification layer.
We may not use the torch.nn.Conv{1,2,3}d functions.
## Importing the libraries
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import random
import torch
import torchvision
from torchvision.datasets import MNIST
```
## Fixing the seeds
```
random.seed(123)
np.random.seed(123)
torch.manual_seed(123)
```
## Defining the initial weights
```
in_channels = 1
out_channels = 2
kernel_size = 5
stride = 3
# Input image size
height_in = 28
width_in = 28
# Image size after the first convolutional layer.
height_out = (height_in - kernel_size) // stride + 1
width_out = (width_in - kernel_size) // stride + 1
initial_conv_weight = torch.FloatTensor(out_channels, in_channels, kernel_size, kernel_size).uniform_(-0.01, 0.01)
initial_conv_bias = torch.FloatTensor(out_channels,).uniform_(-0.01, 0.01)
initial_classification_weight = torch.FloatTensor(10, out_channels * height_out * width_out).uniform_(-0.01, 0.01)
initial_classification_bias = torch.FloatTensor(10,).uniform_(-0.01, 0.01)
```
## Dataset and dataloader
### Defining the minibatch size
```
batch_size = 50
```
### Loading and creating the dataset and dataloader
```
dataset_dir = '../data/'
dataset_train_full = MNIST(dataset_dir, train=True, download=True,
transform=torchvision.transforms.ToTensor())
print(dataset_train_full.data.shape)
print(dataset_train_full.targets.shape)
```
### Using only 1000 MNIST samples
In this exercise we will use 1000 training samples.
```
indices = torch.randperm(len(dataset_train_full))[:1000]
dataset_train = torch.utils.data.Subset(dataset_train_full, indices)
```
## Create the dataloader and inspect a minibatch
```
loader_train = torch.utils.data.DataLoader(dataset_train, batch_size=batch_size, shuffle=False)
print('Número de minibatches de trenamento:', len(loader_train))
x_train, y_train = next(iter(loader_train))
print("\nDimensões dos dados de um minibatch:", x_train.size())
print("Valores mínimo e máximo dos pixels: ", torch.min(x_train), torch.max(x_train))
print("Tipo dos dados das imagens: ", type(x_train))
print("Tipo das classes das imagens: ", type(y_train))
```
## Convolutional Layer
```
class MyConv2d(torch.nn.Module):
def __init__(self, in_channels: int, out_channels: int, kernel_size: int, stride: int):
super(MyConv2d, self).__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size # The same for height and width.
self.stride = stride # The same for height and width.
self.weight = torch.nn.Parameter(torch.FloatTensor(out_channels, in_channels, kernel_size, kernel_size).uniform_(-0.01, 0.01))
self.bias = torch.nn.Parameter(torch.FloatTensor(out_channels,).uniform_(-0.01, 0.01))
def forward(self, x):
assert x.dim() == 4, f'x must have 4 dimensions: {x.shape}'
        # The original notebook left this as an exercise ("write your code here").
        # One possible implementation using unfold (im2col), without torch.nn.Conv2d:
        batch_size, _, height_in, width_in = x.shape
        height_out = (height_in - self.kernel_size) // self.stride + 1
        width_out = (width_in - self.kernel_size) // self.stride + 1
        # Extract all sliding patches: (batch, in_channels * k * k, n_patches)
        patches = torch.nn.functional.unfold(x, kernel_size=self.kernel_size, stride=self.stride)
        # Flattened kernels: (out_channels, in_channels * k * k)
        weight = self.weight.reshape(self.out_channels, -1)
        # Batched matmul plus bias, then reshape back to image layout
        out = weight @ patches + self.bias.reshape(1, -1, 1)
        out = out.reshape(batch_size, self.out_channels, height_out, width_out)
        return out
```
## Check that your implementation matches PyTorch's using a simple example
```
in_channels_dummy = 1
out_channels_dummy = 1
kernel_size_dummy = 2
stride_dummy = 1
conv_layer = MyConv2d(in_channels=in_channels_dummy, out_channels=out_channels_dummy, kernel_size=kernel_size_dummy, stride=stride_dummy)
pytorch_conv_layer = torch.nn.Conv2d(in_channels=in_channels_dummy, out_channels=out_channels_dummy, kernel_size=kernel_size_dummy, stride=stride_dummy, padding=0)
# Use the same weights for my implementation and PyTorch's
initial_weights_dummy = torch.arange(in_channels_dummy * out_channels_dummy * kernel_size_dummy * kernel_size_dummy).float()
initial_weights_dummy = initial_weights_dummy.reshape(out_channels_dummy, in_channels_dummy, kernel_size_dummy, kernel_size_dummy)
initial_bias_dummy = torch.arange(out_channels_dummy,).float()
conv_layer.weight.data = initial_weights_dummy
conv_layer.bias.data = initial_bias_dummy
pytorch_conv_layer.load_state_dict(dict(weight=initial_weights_dummy, bias=initial_bias_dummy))
x = torch.arange(30).float().reshape(1, 1, 5, 6)
out = conv_layer(x)
target_out = pytorch_conv_layer(x)
assert torch.allclose(out, target_out, atol=1e-6)
```
## Check that your implementation matches PyTorch's using a random example
```
x = torch.rand(2, in_channels, height_in, width_in)
conv_layer = MyConv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride)
pytorch_conv_layer = torch.nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=0)
# Use the same weights for my implementation and PyTorch's
conv_layer.weight.data = initial_conv_weight
conv_layer.bias.data = initial_conv_bias
pytorch_conv_layer.load_state_dict(dict(weight=initial_conv_weight, bias=initial_conv_bias))
out = conv_layer(x)
target_out = pytorch_conv_layer(x)
assert torch.allclose(out, target_out, atol=1e-6)
```
## Model
```
class Net(torch.nn.Module):
def __init__(self, height_in: int, width_in: int, in_channels: int, out_channels: int, kernel_size: int, stride: int):
super(Net, self).__init__()
self.conv_layer = MyConv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride)
height_out = (height_in - kernel_size) // stride + 1
width_out = (width_in - kernel_size) // stride + 1
self.classification_layer = torch.nn.Linear(out_channels * height_out * width_out, 10)
def forward(self, x):
hidden = self.conv_layer(x)
hidden = torch.nn.functional.relu(hidden)
hidden = hidden.reshape(x.shape[0], -1)
logits = self.classification_layer(hidden)
return logits
```
## Training
### Defining the hyperparameters
```
n_epochs = 50
lr = 0.1
```
### Training loop
```
model = Net(height_in=height_in, width_in=width_in, in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride)
# Use predefined initial weights
model.classification_layer.load_state_dict(dict(weight=initial_classification_weight, bias=initial_classification_bias))
model.conv_layer.weight.data = initial_conv_weight
model.conv_layer.bias.data = initial_conv_bias
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr)
epochs = []
loss_history = []
loss_epoch_end = []
total_trained_samples = 0
for i in range(n_epochs):
for x_train, y_train in loader_train:
        # network forward pass
        outputs = model(x_train)
        # compute the loss
        loss = criterion(outputs, y_train)
        # zero gradients, backpropagate, and update parameters via gradient descent
optimizer.zero_grad()
loss.backward()
optimizer.step()
total_trained_samples += x_train.size(0)
epochs.append(total_trained_samples / len(dataset_train))
loss_history.append(loss.item())
loss_epoch_end.append(loss.item())
print(f'Epoch: {i:d}/{n_epochs - 1:d} Loss: {loss.item()}')
```
### Usual visualization of the loss, only at the end of each epoch
```
n_batches_train = len(loader_train)
plt.plot(epochs[::n_batches_train], loss_history[::n_batches_train])
plt.xlabel('época')
loss_epoch_end
# Assert on the loss history
target_loss_epoch_end = np.array([
2.303267478942871,
2.227701187133789,
1.0923893451690674,
0.5867354869842529,
0.5144089460372925,
0.45026642084121704,
0.4075140357017517,
0.37713879346847534,
0.3534485101699829,
0.3341451585292816,
0.3181140422821045,
0.30457887053489685,
0.29283496737480164,
0.2827608287334442,
0.2738332152366638,
0.2657742500305176,
0.2583288848400116,
0.25117507576942444,
0.24439716339111328,
0.23789969086647034,
0.23167723417282104,
0.22562651336193085,
0.21984536945819855,
0.2142913043498993,
0.20894232392311096,
0.203872948884964,
0.19903430342674255,
0.19439971446990967,
0.18994088470935822,
0.18563991785049438,
0.18147490918636322,
0.17744913697242737,
0.17347246408462524,
0.16947467625141144,
0.16547319293022156,
0.16150487959384918,
0.1574639081954956,
0.1534043848514557,
0.14926929771900177,
0.1452063024044037,
0.1412365883588791,
0.13712672889232635,
0.1331038922071457,
0.1291467249393463,
0.1251506358385086,
0.12116757035255432,
0.11731722950935364,
0.11364627629518509,
0.11001908034086227,
0.10655981302261353])
assert np.allclose(np.array(loss_epoch_end), target_loss_epoch_end, atol=1e-6)
```
# Prediction: Beyond Simple Random Walks
The tracking algorithm, at its simplest level, takes each particle in the previous frame and tries to find it in the current frame. This requires knowing where to look for it; if we find an actual particle near that spot, it's probably a match. The basic algorithm (Crocker & Grier) was developed to track particles undergoing Brownian diffusion, which ideally means that a particle's velocity is uncorrelated from one frame to the next. Therefore, the best guess for where a particle is going is that it will be near its most recent location.
Let's formalize this guessing as *prediction*. Consider a function
$$P(t_1, t_0, \vec x(t_0))$$
that takes the particle at position $\vec x(t_0)$ and predicts its future position $\vec x(t_1)$. The optimal predictor for Brownian motion is
$$P(t_1, t_0, \vec x(t_0)) = \vec x(t_0)$$
which happily is also the easiest to implement.
The better our prediction about where to look in the next frame, the more likely we will find the one and only particle we seek. `trackpy` looks for the particle in a small region of radius `search_range`, centered on $P(t_1, t_0, \vec x(t_0))$. So to successfully track particle $i$ puts a limit on the error in our prediction:
$$\|P(t_1, t_0, \vec x_i(t_0)) - \vec x_i(t_1)\| \le \tt{search\_range}$$
This favors a generous `search_range`. However, if `search_range` is too big, then for each particle in the previous frame there will be many possible matches in the current frame, and so matching one frame to the next requires the computer to consider a mind-boggling set of possibilities. Tracking may become impossibly slow, and this causes `trackpy` to halt and raise a `SubnetOversizeException`, rather than keep you waiting forever. So for the Brownian $P$ above, `search_range` must be bigger than the largest particle displacement between frames, but smaller than the typical spacing between particles. If such a value cannot be found among the real numbers, then you have a problem.
However, if particle motion is not strictly Brownian, its velocity probably *is* correlated in time. We may be able to improve $P$. We will now do this with `trackpy`.
## Prescribed predictors
Let's start by demonstrating the mechanics of $P$ in `trackpy`. `trackpy`'s various `link_` functions accept a `predictor` argument, which is a Python function that implements $P$.
Before we see how, let's fake some data: a regular array of particles, translating with constant velocity.
```
%matplotlib inline
from pylab import * # not recommended usage, but we use it for brevity here
import numpy as np
import pandas
def fakeframe(t=0, Nside=4):
xg, yg = np.mgrid[:Nside,:Nside]
dx = 1 * t
dy = -1 * t
return pandas.DataFrame(
dict(x=xg.flatten() + dx, y=yg.flatten() + dy, frame=t))
```
Let's visualize 2 frames. In all of the plots below, the blue circles are the particles of the first frame and the green squares are the particles of the last frame.
```
f0 = fakeframe(0)
f1 = fakeframe(0.8)
plot(f0.x, f0.y, 'bo')
plot(f1.x, f1.y, 'gs')
axis('equal'); ylim(ymin=-1.0, ymax=3.5)
```
Track and visualize.
```
import trackpy
tr = pandas.concat(trackpy.link_df_iter((f0, f1), 0.5))
def trshow(tr, first_style='bo', last_style='gs', style='b.'):
frames = list(tr.groupby('frame'))
nframes = len(frames)
for i, (fnum, pts) in enumerate(frames):
if i == 0:
sty = first_style
elif i == nframes - 1:
sty = last_style
else:
sty = style
plot(pts.x, pts.y, sty)
trackpy.plot_traj(tr, colorby='frame', ax=gca())
axis('equal'); ylim(ymin=-1.0, ymax=3.5)
xlabel('x')
ylabel('y')
trshow(tr)
```
Obviously this is not what we wanted at all! Let's give `trackpy.link_df_iter()` a $P$ which reflects this constant velocity.
We define `predict()` for a single particle, and use the `trackpy.predict.predictor` decorator to let it make predictions for many particles at once. Then, we pass it to `link_df_iter()` via the `predictor` argument.
```
import trackpy.predict
@trackpy.predict.predictor
def predict(t1, particle):
velocity = np.array((1, -1)) # See fakeframe()
return particle.pos + velocity * (t1 - particle.t)
tr = pandas.concat(trackpy.link_df_iter((f0, f1), 0.5, predictor=predict))
trshow(tr)
```
Yay! Remember: Our predictor doesn't have to know exactly where the particle will be; it just has to bias the search enough that the correct identification will be made.
## Dynamic predictors
Of course, it's rare that you will know your particles' velocities ahead of time. It would be much better for the predictor to "learn" about the velocities, and allow different particles to have different velocities that can change over time. To accomplish this, we have to do more than just supply $P$: we have to know particles' most recent velocities.
$$P(t_1, t_0, \vec x_i(t_0)) = \vec x_i(t_0) + \frac{\vec x_i(t_0) - \vec x_i(t_{-1})}{t_0 - t_{-1}} (t_1 - t_0)$$
To implement this kind of prediction in `trackpy`, we use instances of the [`trackpy.predict.NearestVelocityPredict`](https://github.com/soft-matter/trackpy/blob/e468027d7bb6e96cbb9f2048530cbc6e8c7172d8/trackpy/predict.py#L145-L196) class.
There are a few caveats:
- Defining this new $P$ for particle $i$ specifically is problematic, because if a new particle is in frame $t_0$ but wasn't in $t_{-1}$, we won't know its velocity. So newly-appeared particles just borrow the velocity of the closest old particle.
- Velocities are undefined in the first frame of the movie, because there is no previous frame. The code falls back to an initial guess of $\vec v_0 = 0$. However, `NearestVelocityPredict`, and the other classes in `trackpy.predict`, allow one to instead specify an initial velocity profile, field, etc. See the docstring of each class; a sketch of supplying such a guess follows this list.
- Even though particles may be in motion at the start of the movie, the default of $\vec v_0 = 0$ is not always so bad. In many cases, at least some of the particles are moving slowly enough that they can be tracked and their velocity can be obtained. Because particles with unknown velocity just borrow the nearest known velocity, as we just discussed, this may give the code a foothold to track more particles in later frames. Your mileage may vary.
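For example, if you know roughly how the particles are moving at the start of the movie, you can seed the predictor with that guess. This is only a sketch: the keyword names (`initial_guess_positions`, `initial_guess_vels`) are taken from the `NearestVelocityPredict` docstring, and the coordinate ordering must match your linking columns, so verify both against your installed trackpy version.
```
# Sketch: seed the predictor with one guess position and its velocity vector.
# (Keyword names and coordinate order are assumptions here; check the docstring.)
pred = trackpy.predict.NearestVelocityPredict(
    initial_guess_positions=[(0., 0.)],
    initial_guess_vels=[(1., -1.)])
tr = pandas.concat(pred.link_df_iter(frames, 0.5))
trshow(tr)
```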
OK, let's see this in action. We'll make a 3-frame movie that starts with small displacements (because of the $\vec v_0 = 0$ assumption) and then speeds up.
```
frames = (fakeframe(0), fakeframe(0.25), fakeframe(0.65))
```
Without prediction, linking of the particles in the top row can't even make it to the 3rd frame.
```
tr = pandas.concat(trackpy.link_df_iter(frames, 0.5))
trshow(tr)
```
`NearestVelocityPredict` objects work by watching the output of linking as it happens, and updating $P$ to use the latest velocities. These objects provide modified versions of trackpy's two main linking functions, `link_df_iter()` and `link_df()`, that work like their namesakes but add dynamic prediction.
First, we use `link_df_iter()` to link the frames with prediction:
```
pred = trackpy.predict.NearestVelocityPredict()
tr = pandas.concat(pred.link_df_iter(frames, 0.5))
trshow(tr)
```
Alternatively, we can use `link_df()`:
```
pred = trackpy.predict.NearestVelocityPredict()
tr = pred.link_df(pandas.concat(frames), 0.5)
trshow(tr)
```
We'll use `link_df_iter()` for the remaining examples, but `link_df()` is always available as well.
(*Note:* Unlike `link_df_iter()`, this `link_df()` is usually (but not always) a drop-in replacement for `trackpy.link_df()`. Consult the documentation or source code for details.)
### Channel flow prediction
There is one special case that is common enough to deserve a special $P$: channel flow, in which velocities are relatively uniform in one direction. For example, if the channel is in the $x$ (i.e. $\hat i$) direction, particle velocities are very well approximated as
$$\vec v = \hat i v_x(y)$$
where the velocity profile $v_x(y)$ is a smoothly-varying function defined across the channel.
This is implemented by the [`trackpy.predict.ChannelPredict`](https://github.com/soft-matter/trackpy/blob/e468027d7bb6e96cbb9f2048530cbc6e8c7172d8/trackpy/predict.py#L228-L328) class. When creating an instance, you must specify the size of the bins used to create the velocity profile. You can also specify the direction of flow; see the class's docstring for details.
Let's create some particles undergoing accelerating shear.
```
def fakeshear(t=0, Nside=4):
xg, yg = np.mgrid[:Nside,:Nside]
dx = 0.45 * t * yg
return pandas.DataFrame(
dict(x=(xg + dx).flatten(), y=yg.flatten(), frame=t))
```
When we attempt to track them, the algorithm fails for the top row of particles.
```
frames = (fakeshear(0), fakeshear(0.25), fakeshear(0.65))
tr = pandas.concat(trackpy.link_df_iter(frames, 0.5))
trshow(tr)
ylim(ymax=3.5);
```
Now, let's try it with prediction:
```
pred = trackpy.predict.ChannelPredict(0.5, 'x', minsamples=3)
tr = pandas.concat(pred.link_df_iter(frames, 0.5))
trshow(tr)
ylim(ymax=3.5);
```
Much better!
### Drift prediction
Finally, the most symmetric prediction class in `trackpy.predict` is [`DriftPredict`](https://github.com/soft-matter/trackpy/blob/e468027d7bb6e96cbb9f2048530cbc6e8c7172d8/trackpy/predict.py#L199-L225). This just makes predictions based on the average velocity of all particles. It is useful when you have some background convective flow. Note that this does *not* remove the flow from your results; to do that, use `trackpy.compute_drift` and `trackpy.subtract_drift`, as in the walkthrough tutorial.
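A minimal sketch of how these pieces fit together, reusing the `frames` from above: `DriftPredict` only helps the linking, while `compute_drift` and `subtract_drift` remove the drift from the resulting trajectories.
```
pred = trackpy.predict.DriftPredict()
tr = pandas.concat(pred.link_df_iter(frames, 0.5))

# Prediction only aids linking; to remove the convective motion from the
# trajectories themselves, measure and subtract the ensemble drift.
drift = trackpy.compute_drift(tr)
tr_corrected = trackpy.subtract_drift(tr.copy(), drift)
trshow(tr_corrected)
```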
# Least Squares
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://licensebuttons.net/l/by/4.0/80x15.png" /></a><br />This notebook by Xiaozhou Li is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
All code examples are also licensed under the [MIT license](http://opensource.org/licenses/MIT).
The concept of least squares permeates modern statistics and mathematical modeling. The key techniques of regression and
parameter estimation have become fundamental tools in the sciences and engineering.
```
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import clear_output, display
```
## Polynomial Fitting
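The helper below fits a degree-$n$ polynomial $p(x) = a_0 + a_1 x + \dots + a_n x^n$ to $m$ data points by assembling and solving the normal equations, which is exactly the linear system the code constructs:

$$\sum_{j=0}^{n} \left( \sum_{k=1}^{m} x_k^{\,i+j} \right) a_j = \sum_{k=1}^{m} x_k^{\,i}\, y_k, \qquad i = 0, 1, \dots, n.$$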
```
def poly_fit(x, y, n):
m = np.size(x)
A = np.zeros([n+1,n+1])
b = np.zeros(n+1)
A_tmp = np.zeros(2*n+1)
for i in range(2*n+1):
for j in range(m):
A_tmp[i] += x[j]**i
if (i < n+1):
b[i] += x[j]**i*y[j]
for i in range(n+1):
A[i] = A_tmp[i:i+n+1]
a = np.linalg.solve(A, b)
return a
def plot_fun(fun, a, b, c='k'):
num = 200
x = np.linspace(a, b, num+1)
y = np.zeros(num+1)
for i in range(num+1):
y[i] = fun(x[i])
plt.plot(x, y, c, linewidth=3)
```
__Example__ Fitting points $(1,2),(2,3),(3,5),(4,7),(5,11),(6,13),(7,17),(8,19),(9,23),(10,29)$ with polynomial
```
x = np.array([1,2,3,4,5,6,7,8,9,10])
y = np.array([2,3,5,7,11,13,17,19,23,29])
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 2)
print (a)
def fitting_fun(a, x):
n = np.size(a)
y = a[n-1]
for i in range(n-1):
y = y*x + a[n-2-i]
return y
#print (fitting_fun(a,0))
def fun(x):
return fitting_fun(a,x)
plot_fun(fun, 1, 10)
```
__Example__ Linear polynomial fitting: linear function with random perturbation
```
def fun1(x):
#return x**3 - x**2 + x
return 3.5*x
m = 20
x = np.linspace(-1,1,m)
y = np.zeros(m)
for i in range(m):
y[i] = fun1(x[i])
y = y + 0.1*np.random.rand(m)
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 1)
plot_fun(fun, -1, 1)
```
__Example__ Linear polynomial fitting for quadratic function
```
def fun2(t):
#return x**3 - x**2 + x
return 300*t - 4.9*t*t
m = 20
x = np.linspace(0,2,m)
y = np.zeros(m)
for i in range(m):
y[i] = fun2(x[i])
# y = y + 0.1*np.random.rand(m)
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 1)
plot_fun(fun, 0, 2)
# longer range
#t = 50
#plot_fun(fun, 0, t)
#x = np.linspace(0,t,200)
#plt.plot(x, fun2(x),'b')
```
__Example__ Fitting points $(1,2),(2,3),(4,7),(6,13),(7,17),(8,19)$ with polynomial
```
x = np.array([1,2,4,6,7,8])
y = np.array([2,3,7,13,17,19])
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 1)
print (a)
plt.plot(x, y, 'ro', markersize=12, linewidth=3)
a = poly_fit(x, y, 2)
print (a)
def fitting_fun(a, x):
n = np.size(a)
y = a[n-1]
for i in range(n-1):
y = y*x + a[n-2-i]
return y
#print (fitting_fun(a,0))
def fun(x):
return fitting_fun(a,x)
plot_fun(fun, 1, 10)
print(np.polyfit(x,y,1))
print(np.polyfit(x,y,2))
```
# Predicting reaction performance in C–N cross-coupling using machine learning
DOI: 10.1126/science.aar5169
Ahneman, D. T.; Estrada, J. G.; Lin, S.; Dreher, S. D.; Doyle, A. G. *Science*, **2018**, *360*, 186-190.
Import schema and helper functions
```
import ord_schema
from datetime import datetime
from ord_schema.proto import reaction_pb2
from ord_schema.units import UnitResolver
from ord_schema import validations
from ord_schema import message_helpers
unit_resolver = UnitResolver()
```
# Define a single reaction
Single reaction from the SI to be used as a template for the remaining entries.
Start by writing a helper function for defining stock solutions.
```
# TODO(ccoley) Replace use of this helper class with the message_helpers.set_solute_moles
class stock_solution:
"""Helper class for defining stock solutions."""
def __init__(self, reaction, stock_name):
self.stock = reaction.inputs[stock_name]
self.concentration = 0.0
self.moles = 0.0
self.volume = 0.0
def add_solute(self, role, name, SMILES=None, is_limiting=False, preparation='NONE',
moles=0.0, volume_liters=0.0):
"""Add solute to solution. Keep track of moles of solute and total volume."""
# Solution volume is sum of solute and solvent volumes
self.moles += float(moles)
self.volume += float(volume_liters)
# Add solute and ID
self.solute = self.stock.components.add()
self.solute.reaction_role = reaction_pb2.ReactionRole.__dict__[role]
self.solute.identifiers.add(value=name, type='NAME')
if SMILES != None:
self.solute.identifiers.add(value=SMILES, type='SMILES')
# Other details
self.solute.preparations.add().type = reaction_pb2.CompoundPreparation.PreparationType.Value(preparation)
self.solute.is_limiting = is_limiting
def add_solvent(self, name, SMILES=None, preparation='NONE', volume_liters=0.0):
"""Add solvent to solution. Keep track of total volume."""
# Solution volume is sum of solute and solvent volumes
self.volume += float(volume_liters)
# Add solute and ID
self.solvent = self.stock.components.add()
self.solvent.reaction_role = reaction_pb2.ReactionRole.SOLVENT
self.solvent.identifiers.add(value=name, type='NAME')
if SMILES != None:
self.solvent.identifiers.add(value=SMILES, type='SMILES')
# Other details
self.solvent.preparations.add().type = reaction_pb2.CompoundPreparation.PreparationType.Value(preparation)
def mix(self, concentration_molar=0):
"""Mix function resolves moles and volume from availible information (concentration, moles, volume)"""
self.concentration = concentration_molar
# Resolve concentration
if self.moles > 0 and self.volume > 0:
self.solute.amount.moles.CopyFrom(unit_resolver.resolve(f'{self.moles*(10**6):16f} umol'))
self.solvent.amount.volume.CopyFrom(unit_resolver.resolve(f'{self.volume*(10**6):16f} uL'))
elif self.concentration > 0 and self.volume > 0:
self.moles = self.concentration * self.volume
self.solute.amount.moles.CopyFrom(unit_resolver.resolve(f'{self.moles*(10**6):16f} umol'))
self.solvent.amount.volume.CopyFrom(unit_resolver.resolve(f'{self.volume*(10**6):16f} uL'))
```
**Define reaction inputs**:
- Catalyst in DMSO (0.05 M)
- Electrophile in DMSO (0.50 M)
- Nucleophile in DMSO (0.50 M)
- Additive in DMSO (0.50 M)
- Base in DMSO (0.75 M)
- The SI does not indicate an order of addition
```
# Define Reaction
reaction = reaction_pb2.Reaction()
reaction.identifiers.add(value=r'Buchwald-Hartwig Amination', type='NAME')
# Catalyst stock solution
catalyst = stock_solution(reaction, r'Pd precatalyst in DMSO')
catalyst.add_solute('CATALYST', r'XPhos', SMILES=r'CC(C)C1=CC(C(C)C)=CC(C(C)C)=C1C2=C(P(C3CCCCC3)C4CCCCC4)C=CC=C2')
catalyst.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
catalyst.mix(concentration_molar=0.05)
# Electrophile stock solution
electrophile = stock_solution(reaction, r'Aryl halide in DMSO')
electrophile.add_solute('REACTANT', r'4-trifluoromethyl chlorobenzene', SMILES=r'ClC1=CC=C(C(F)(F)F)C=C1', is_limiting=True)
electrophile.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
electrophile.mix(concentration_molar=0.50)
# Nucleophile stock solution
nucleophile = stock_solution(reaction, r'Amine in DMSO')
nucleophile.add_solute('REACTANT', r'p-toluidine', SMILES=r'NC1=CC=C(C)C=C1')
nucleophile.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
nucleophile.mix(concentration_molar=0.50)
# Additive stock solution
additive = stock_solution(reaction, r'Additive in DMSO')
additive.add_solute('REAGENT', r'5-phenylisoxazole', SMILES=r'o1nccc1c2ccccc2')
additive.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
additive.mix(concentration_molar=0.50)
# Base stock solution
base = stock_solution(reaction, r'Base in DMSO')
base.add_solute('REAGENT', r'P2Et', SMILES=r'CN(C)P(N(C)C)(N(C)C)=NP(N(C)C)(N(C)C)=NCC')
base.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
base.mix(concentration_molar=0.75)
```
Define reaction setup & conditions
```
# Reactions performed in a 1536-well plate
reaction.setup.vessel.CopyFrom(
reaction_pb2.Vessel(
type='WELL_PLATE',
material=dict(type='PLASTIC', details='polypropylene'),
volume=unit_resolver.resolve('12.5 uL')
)
)
reaction.setup.is_automated = True
reaction.setup.environment.type = reaction.setup.environment.GLOVE_BOX
# Heated - not specified how
t_conds = reaction.conditions.temperature
t_conds.setpoint.CopyFrom(reaction_pb2.Temperature(units='CELSIUS', value=60))
# Glove box work
p_conds = reaction.conditions.pressure
p_conds.control.type = p_conds.PressureControl.SEALED
p_conds.atmosphere.type = p_conds.Atmosphere.NITROGEN
p_conds.atmosphere.details = 'dry nitrogen'
# No safety notes
reaction.notes.safety_notes = ''
```
After 16 h, the plate was opened and the Mosquito was used to add internal standard to each well (3 µL of 0.0025 M di-tert-butylbiphenyl solution in DMSO). At that point, aliquots were sampled into 384-well plates and analyzed by UPLC.
```
# Internal standard stock solution
standard = stock_solution(reaction, r'Internal standard in DMSO')
standard.add_solute('INTERNAL_STANDARD', "4,4'-di-tert-butyl-1,1'-biphenyl", SMILES=r'CC(C)(C)C1=CC=C(C2=CC=C(C(C)(C)C)C=C2)C=C1')
standard.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=3e-6)
standard.mix(concentration_molar=0.0025)
outcome = reaction.outcomes.add()
outcome.reaction_time.CopyFrom(unit_resolver.resolve('16 hrs'))
# Analyses: UPLC
# Note using LCMS because UPLC is not an option
outcome.analyses['UPLC analysis'].type = reaction_pb2.Analysis.LCMS
outcome.analyses['UPLC analysis'].details = ('UPLC using 3 µL of 0.0025 M di-tert-butylbiphenyl solution in DMSO external standard')
outcome.analyses['UPLC analysis'].instrument_manufacturer = 'Waters Acquity'
# Define product identity
prod_2a = outcome.products.add()
prod_2a.identifiers.add(value=r'FC(C1=CC=C(NC2=CC=C(C)C=C2)C=C1)(F)F', type='SMILES')
prod_2a.is_desired_product = True
prod_2a.reaction_role = reaction_pb2.ReactionRole.PRODUCT
# The UPLC analysis was used to confirm both identity and yield
prod_2a.measurements.add(type='IDENTITY', analysis_key='UPLC analysis')
prod_2a.measurements.add(type='YIELD', analysis_key='UPLC analysis', percentage=dict(value=10.65781182),
uses_internal_standard=True)
# Reaction provenance
reaction.provenance.city = r'Kenilworth, NJ'
reaction.provenance.doi = r'10.1126/science.aar5169'
reaction.provenance.publication_url = r'https://science.sciencemag.org/content/360/6385/186'
reaction.provenance.record_created.time.value = datetime.now().strftime("%m/%d/%Y, %H:%M:%S")
reaction.provenance.record_created.person.CopyFrom(reaction_pb2.Person(
name='Benjamin J. Shields', organization='Princeton University', email='[email protected]'))
```
Validate and examine this final prototypical reaction entry
```
outcome.products
validations.validate_message(reaction)
reaction
```
# Full HTE Data Set
```
# Get full set of reactions: I preprocessed this to have SMILES for each component.
# Note I am only including the data that was used for modeling - there are some
# controls and failed reactions in the SI (if we even want them?).
import pandas as pd
import os
if not os.path.isfile('experiment_index.csv'):
!wget https://github.com/Open-Reaction-Database/ord-schema/raw/main/examples/9_Ahneman_Science_CN_Coupling/experiment_index.csv
index = pd.read_csv('experiment_index.csv')
index
# I happened to have ID tables around so we can give the components names
def match_name(column, list_path):
"""Match names from csv files to SMILES."""
if not os.path.isfile(list_path):
!wget https://github.com/Open-Reaction-Database/ord-schema/raw/main/examples/9_Ahneman_Science_CN_Coupling/{list_path}
component_list = pd.read_csv(list_path)
# Get SMILES column
for col in component_list.columns.values:
if 'SMILES' in col:
smi_col = col
# Get name column
names = index[column].copy()
for i in range(len(component_list)):
names = names.replace(component_list[smi_col][i], component_list['name'][i])
return names.values
index['Aryl_halide_name'] = match_name('Aryl_halide_SMILES', 'aryl_halide-list.csv')
index['Additive_name'] = match_name('Additive_SMILES', 'additive-list.csv')
index['Base_name'] = match_name('Base_SMILES', 'base-list.csv')
index['Ligand_name'] = match_name('Ligand_SMILES', 'ligand-list.csv')
index.head()
# Products aren't listed - Use rdkit to get them
from rdkit import Chem
from rdkit.Chem import AllChem
def amination(aryl_halide):
"""Get product based on aryl halide identity."""
replace_with = Chem.MolFromSmiles('NC1=CC=C(C)C=C1')
pattern = Chem.MolFromSmarts('[Cl,Br,I]')
molecule = Chem.MolFromSmiles(aryl_halide)
product = AllChem.ReplaceSubstructs(molecule, pattern, replace_with)
return Chem.MolToSmiles(product[0])
index['Product_SMILES'] = [amination(aryl_halide) for aryl_halide in index['Aryl_halide_SMILES'].tolist()]
index.head()
# Reorder the dataframe
index = index[['Ligand_SMILES', 'Ligand_name',
'Aryl_halide_SMILES', 'Aryl_halide_name',
'Additive_SMILES', 'Additive_name',
'Base_SMILES', 'Base_name',
'Product_SMILES', 'yield']]
# Gonna time execution
import time
class timer:
"""
Returns wall clock-time
"""
def __init__(self, name):
self.start = time.time()
self.name = name
def stop(self):
self.end = time.time()
print(self.name + ': ' + str(self.end - self.start) + ' s')
```
The only reaction inputs that vary across entries are: (1) ligand, (2) electrophile, (3) additive, and (4) base; the product and measured yield change accordingly.
```
t = timer('3955 Entries')
reactions = []
for lig_s, lig_n, elec_s, elec_n, add_s, add_n, base_s, base_n, prod, y in index.values:
# Define Reaction
reaction = reaction_pb2.Reaction()
reaction.identifiers.add(value=r'Buchwald-Hartwig Amination', type='NAME')
# Catalyst stock solution
catalyst = stock_solution(reaction, r'Pd precatalyst in DMSO')
catalyst.add_solute('CATALYST', lig_n, SMILES=lig_s)
catalyst.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
catalyst.mix(concentration_molar=0.05)
# Electrophile stock solution
electrophile = stock_solution(reaction, r'Aryl halide in DMSO')
electrophile.add_solute('REACTANT', elec_n, SMILES=elec_s, is_limiting=True)
electrophile.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
electrophile.mix(concentration_molar=0.50)
# Nucleophile stock solution
nucleophile = stock_solution(reaction, r'Amine in DMSO')
nucleophile.add_solute('REACTANT', r'p-toluidine', SMILES=r'NC1=CC=C(C)C=C1')
nucleophile.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
nucleophile.mix(concentration_molar=0.50)
# Additive stock solution
additive = stock_solution(reaction, r'Additive in DMSO')
additive.add_solute('REAGENT', add_n, SMILES=add_s)
additive.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
additive.mix(concentration_molar=0.50)
# Base stock solution
base = stock_solution(reaction, r'Base in DMSO')
base.add_solute('REAGENT', base_n, SMILES=base_s)
base.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=200e-9)
base.mix(concentration_molar=0.75)
    # Reactions performed in 1536-well plate
reaction.setup.vessel.CopyFrom(
reaction_pb2.Vessel(
type='WELL_PLATE',
material=dict(type='PLASTIC'),
volume=unit_resolver.resolve('12.5 uL')
)
)
reaction.setup.is_automated = True
reaction.setup.environment.type = reaction_pb2.ReactionSetup.ReactionEnvironment.GLOVE_BOX
# Heated - not specified how
t_conds = reaction.conditions.temperature
t_conds.setpoint.CopyFrom(reaction_pb2.Temperature(units='CELSIUS', value=60))
# Glove box work
p_conds = reaction.conditions.pressure
p_conds.control.type = p_conds.PressureControl.SEALED
p_conds.atmosphere.type = p_conds.Atmosphere.NITROGEN
p_conds.atmosphere.details = 'dry nitrogen'
# Notes
reaction.notes.safety_notes = ''
# TODO(ccoley) Stock solutions can be defined without using this custom function
# Standard stock solution
standard = stock_solution(reaction, r'External standard in DMSO')
standard.add_solute('INTERNAL_STANDARD', r'4,4\'-di-tert-butyl-1,1\'-biphenyl', SMILES=r'CC(C)(C)C1=CC=C(C2=CC=C(C(C)(C)C)C=C2)C=C1')
standard.add_solvent(r'DMSO', SMILES=r'O=S(C)C', volume_liters=3e-6)
standard.mix(concentration_molar=0.0025)
outcome = reaction.outcomes.add()
outcome.reaction_time.CopyFrom(unit_resolver.resolve('16 hrs'))
# Analyses: UPLC/MS
outcome.analyses['UPLC analysis'].type = reaction_pb2.Analysis.LCMS
outcome.analyses['UPLC analysis'].details = ('UPLC using 3 µL of 0.0025 M di-tert-butylbiphenyl solution in DMSO external standard')
outcome.analyses['UPLC analysis'].instrument_manufacturer = 'Waters Acquity'
# Define product identity
prod_2a = outcome.products.add()
    prod_2a.identifiers.add(value=prod, type='SMILES')
prod_2a.is_desired_product = True
prod_2a.reaction_role = reaction_pb2.ReactionRole.PRODUCT
# The UPLC analysis was used to confirm both identity and yield
prod_2a.measurements.add(type='IDENTITY', analysis_key='UPLC analysis')
prod_2a.measurements.add(type='YIELD', analysis_key='UPLC analysis', percentage=dict(value=y),
uses_internal_standard=True)
# Reaction provenance
reaction.provenance.city = r'Kenilworth, NJ'
reaction.provenance.doi = r'10.1126/science.aar5169'
reaction.provenance.publication_url = r'https://science.sciencemag.org/content/360/6385/186'
reaction.provenance.record_created.time.value = datetime.now().strftime("%m/%d/%Y, %H:%M:%S")
reaction.provenance.record_created.person.CopyFrom(reaction_pb2.Person(
name='Benjamin J. Shields', organization='Princeton University', email='[email protected]')
)
# Validate
output = validations.validate_message(reaction)
for error in output.errors:
print(error)
# Append
reactions.append(reaction)
t.stop()
print(f'Generated {len(reactions)} reactions')
# Inspect random reaction from this set
reactions[15]
```
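With all of the per-well `Reaction` messages built and validated, they can be bundled into a single `Dataset` message for archiving or submission. The following is a minimal sketch assuming the `dataset_pb2` and `message_helpers` modules from `ord-schema`; the dataset name, description, and output filename here are illustrative only:
```
from ord_schema import message_helpers
from ord_schema.proto import dataset_pb2

# Bundle the validated reactions into one Dataset message.
hte_dataset = dataset_pb2.Dataset(
    name='Buchwald-Hartwig amination HTE (Ahneman et al., Science 2018)',
    description='Plate-based C-N coupling reactions recorded with the ORD schema.',
    reactions=reactions,
)

# The file extension selects the serialization format (e.g. .pb, .pbtxt, .json).
message_helpers.write_message(hte_dataset, 'cn_coupling_hte_dataset.pb')
```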
# Investigating the effect of Company Announcements on their Share Price following COVID-19 (using the S&P 500)
A lot of company valuation speculation has come about since the COrona-VIrus Disease 2019 (COVID-19, or COVID for short) started to impact the stock market (estimated to have begun on the 20$^{\text{th}}$ of February 2020, 2020-02-20). Many investors tried to estimate the impact of the outbreak on businesses and trade accordingly as fast as possible. In this haste, it is possible that they mispriced the effect of COVID on certain stocks. \
This article lays out a framework to investigate whether the Announcement of Financial Statements after COVID (*id est* (*i.e.*): after 2020-02-20) impacted the price of stocks in any specific industry sector. It will proceed simply by producing a graph of the **movement in average daily close prices for each industry - averaged from the time each company produced a Post COVID Announcement** (i.e.: after they first produced a Financial Statement after 2020-02-20). \
From there, one may stipulate that a profitable investment strategy could consist in going long in stocks of companies (i) that have not yet released an announcement since COVID and (ii) within a sector that the framework below suggests will probably increase in price following such an announcement.
## Pre-requisites:
Thomson Reuters Eikon with access to new Eikon Data APIs. \
Required Python Packages: [Refinitiv Eikon Python API](https://developers.refinitiv.com/eikon-apis/eikon-data-api), [Numpy](https://numpy.org/), [Pandas](https://pandas.pydata.org/) and [Matplotlib](https://matplotlib.org/). The Python built in modules [datetime](https://docs.python.org/3/library/datetime.html) and [dateutil](https://dateutil.readthedocs.io/en/stable/) are also required.
### Supplementary:
[pickle](https://docs.python.org/3/library/pickle.html): If one wishes to copy and manipulate this code, 'pickling' data along the way should help ensure that no data is lost in case of kernel issues.
$ \\ $
## Import libraries
First we can use the library ' platform ' to show which version of Python we are using
```
# The ' from ... import ' structure here allows us to only import the module ' python_version ' from the library ' platform ':
from platform import python_version
print("This code runs on Python version " + python_version())
```
$$ \\ $$
We use **Refinitiv's [Eikon Python Application Programming Interface (API)](https://developers.refinitiv.com/eikon-apis/eikon-data-api)** to access financial data. We can access it via the Python library "eikon" that can be installed simply by using $\textit{pip install}$.
```
import eikon as ek
# The key is placed in a text file so that it may be used in this code without showing it itself:
eikon_key = open("eikon.txt","r")
ek.set_app_key(str(eikon_key.read()))
# It is best to close the files we opened in order to make sure that we don't stop any other services/programs from accessing them if they need to:
eikon_key.close()
```
$$ \\ $$
The following are Python built-in modules/libraries; therefore, they do not have specific version numbers.
```
# datetime will allow us to manipulate Western World dates
import datetime
# dateutil will allow us to manipulate dates in equations
import dateutil
```
$$ \\ $$
numpy is needed for datasets' statistical and mathematical manipulations
```
import numpy
print("The numpy library imported in this code is version: " + numpy.__version__)
```
$$ \\ $$
pandas will be needed to manipulate data sets
```
import pandas
# This line will ensure that all columns of our dataframes are always shown:
pandas.set_option('display.max_columns', None)
print("The pandas library imported in this code is version: " + pandas.__version__)
```
$$ \\ $$
matplotlib is needed to plot graphs of all kinds
```
import matplotlib
# the use of ' as ... ' (specifically here: ' as plt ') allows us to create a shorthand for a module (here: ' matplotlib.pyplot ')
import matplotlib.pyplot as plt
print("The matplotlib library imported in this code is version: " + matplotlib.__version__)
```
$$ \\ $$
## Defining Functions
$$ \\ $$
The cell below defines a function to plot data on one y axis (as opposed to two, one on the right and one on the left).
```
# Using an implicitly registered datetime converter for a matplotlib plotting method is no longer supported by matplotlib. Current versions of pandas requires explicitly registering matplotlib converters:
pandas.plotting.register_matplotlib_converters()
def plot1ax(dataset, ylabel = "", title = "", xlabel = "Year",
            datasubset = [0], # datasubset needs to be a list of the number of each column within the dataset that needs to be labelled on the left
datarange = False, # If wanting to plot graph from and to a specific point, make datarange a list of start and end date
linescolor = False, # This needs to be a list of the color of each vector to be plotted, in order they are shown in their dataframe from left to right
figuresize = (12,4), # This can be changed to give graphs of different proportions. It is defaulted to a 12 by 4 (ratioed) graph
facecolor="0.25",# This allows the user to change the background color as needed
grid = True, # This allows us to decide whether or not to include a grid in our graphs
time_index = [], time_index_step = 48, # These two variables allow us to dictate the frequency of the ticks on the x-axis of our graph
legend = True):
    # The if statement below allows for manipulation of the date range that we would like to graph:
if datarange == False:
start_date = str(dataset.iloc[:,datasubset].index[0])
end_date = str(dataset.iloc[:,datasubset].index[-1])
else:
start_date = str(datarange[0])
        # The if statement below allows us to graph to the end of the dataframe if wanted, whatever date that may be:
if datarange[-1] == -1:
end_date = str(dataset.iloc[:,datasubset].index[-1])
else:
end_date = str(datarange[-1])
fig, ax1 = plt.subplots(figsize=figuresize, facecolor=facecolor)
ax1.tick_params(axis = 'both', colors = 'w')
ax1.set_facecolor(facecolor)
fig.autofmt_xdate()
plt.ylabel(ylabel, color ='w')
ax1.set_xlabel(str(xlabel), color = 'w')
if linescolor == False:
for i in datasubset: # This is to label all the lines in order to allow matplot lib to create a legend
ax1.plot(dataset.iloc[:, i].loc[start_date : end_date],
label = str(dataset.columns[i]))
else:
for i in datasubset: # This is to label all the lines in order to allow matplot lib to create a legend
ax1.plot(dataset.iloc[:, i].loc[start_date : end_date],
label = str(dataset.columns[i]),
color = linescolor)
ax1.tick_params(axis='y')
if grid == True:
ax1.grid()
else:
pass
if len(time_index) != 0:
# locs, labels = plt.xticks()
plt.xticks(numpy.arange(len(dataset.iloc[:,datasubset]), step = time_index_step), [i for i in time_index[0::time_index_step]])
else:
pass
ax1.set_title(str(title) + " \n", color='w')
if legend == True:
plt.legend()
elif legend == "underneath":
ax1.legend(loc = 'upper center', bbox_to_anchor = (0.5, -0.3), fancybox = True, shadow = True, ncol = 5)
elif legend != False:
plt.legend().get_texts()[0].set_text(legend)
plt.show()
```
$$ \\ $$
The cell below defines a function that adds a series of daily close prices to the dataframe named 'daily_df' and plots it.
```
# Defining the ' daily_df ' variable before the ' Get_Daily_Close ' function
daily_df = pandas.DataFrame()
def Get_Daily_Close(instrument, # Name of the instrument in a list.
days_back, # Number of days from which to collect the data.
plot_title = False, # If ' = True ', then a graph of the data will be shown.
plot_time_index_step = 30 * 3, # This line dictates the index frequency on the graph/plot's x axis.
col = ""): # This can be changed to name the column of the merged dataframe.
# This instructs the function to use a pre-defined ' daily_df ' variable:
global daily_df
if col == "":
# If ' col ' is not defined, then the column name of the data will be replaced with its instrument abbreviated name followed by " Close Price".
col = str(instrument) + " Close Price"
else:
pass
# This allows for the function to programmatically ensure that all instruments' data are collected - regardless of potential server Timeout Errors.
worked = False
while worked != True:
try:
instrument, err = ek.get_data(instruments = instrument,
fields = [str("TR.CLOSEPRICE(SDate=-" + str(days_back) + ",EDate=0,Frq=D,CALCMETHOD=CLOSE).timestamp"),
str("TR.CLOSEPRICE(SDate=-" + str(days_back) + ",EDate=0,Frq=D,CALCMETHOD=CLOSE)")])
instrument.dropna()
worked = True
except:
# Note that this ' except ' is necessary
pass
instrument = pandas.DataFrame(list(instrument.iloc[:,2]), index = list(instrument.iloc[:,1]), columns = [col])
instrument.index = pandas.to_datetime(instrument.index, format = "%Y-%m-%d")
if plot_title != False:
plot1ax(dataset = instrument.dropna(), ylabel = "Close Price", title = str(plot_title), xlabel = "Year", # legend ="Close Price",
linescolor = "#ff9900", time_index_step = plot_time_index_step, time_index = instrument.dropna().index)
daily_df = pandas.merge(daily_df, instrument, how = "outer", left_index = True, right_index = True)
```
$$ \\ $$
The cell below sets up a function that gets Eikon-recorded Company Announcement data through time for any index (or instrument)
```
def Get_Announcement_For_Index(index_instrument, periods_back, show_df = False, show_list = False):
# This allows the function to collect a list of all constituents of the index
index_issuer_rating, err = ek.get_data(index_instrument, ["TR.IssuerRating"])
index_Announcement_list = []
for i in range(len(index_issuer_rating)):
# This allows for the function to programmatically ensure that all instruments' data are collected - regardless of potential server Timeout Errors.
worked = False
while worked != True:
try: # The ' u ' in ' index_issuer_rating_u ' is for 'unique' as it will be for each unique instrument
index_Announcement_u, err = ek.get_data(index_issuer_rating.iloc[i,0],
["TR.JPINCOriginalAnnouncementDate(SDate=-" + str(periods_back) + ",EDate=0,,Period=FI0,Frq=FI)",
"TR.JPCASOriginalAnnouncementDate(SDate=-" + str(periods_back) + ",EDate=0,,Period=FI0,Frq=FI)",
"TR.JPBALOriginalAnnouncementDate(SDate=-" + str(periods_back) + ",EDate=0,,Period=FI0,Frq=FI)"])
worked = True
except:
# Note that this ' except ' is necessary
pass
index_Announcement_list.append(index_Announcement_u)
index_Instrument = []
index_Income_Announcement = []
index_Cash_Announcement = []
index_Balance_Announcement = []
for i in range(len(index_Announcement_list)):
for j in range(len(index_Announcement_list[i])):
index_Instrument.append(index_Announcement_list[i].iloc[j,0])
index_Income_Announcement.append(index_Announcement_list[i].iloc[j,1])
index_Cash_Announcement.append(index_Announcement_list[i].iloc[j,2])
index_Balance_Announcement.append(index_Announcement_list[i].iloc[j,3])
index_Announcement_df = pandas.DataFrame(columns = ["Instrument",
"Income Statement Announcement Date",
"Cash Flos Statement Announcement Date",
"Balance Sheet Announcement Date"])
index_Announcement_df.iloc[:,0] = index_Instrument
index_Announcement_df.iloc[:,1] = pandas.to_datetime(index_Income_Announcement)
index_Announcement_df.iloc[:,2] = pandas.to_datetime(index_Cash_Announcement)
index_Announcement_df.iloc[:,3] = pandas.to_datetime(index_Balance_Announcement)
if show_df == True:
display(index_Announcement_df)
else:
pass
if show_list == True:
for i in range(len(index_Announcement_list)):
display(index_Announcement_list[i])
else:
pass
return index_Announcement_df, index_Announcement_list
```
$$ \\ $$
## Setting Up Dates
Before starting to investigate data pre- or post-COVID, we need to define the specific time when COVID affected stock markets: In this instance we chose "2020-02-20"
```
COVID_start_date = datetime.datetime.strptime("2020-02-20", '%Y-%m-%d').date()
days_since_COVID = (datetime.date.today() - COVID_start_date).days
```
$$ \\ $$
## Announcements
The cell below collects announcements of companies within the index of choice for the past 3 financial periods. In this article, the Standard & Poor's 500 Index (S&P 500, or SPX for short) is used as an example. It can be used with indices such as the FTSE or DJI instead of the SPX; a commented-out example with an assumed FTSE chain RIC follows the call below.
```
index_Announcement_df, index_Announcement_list = Get_Announcement_For_Index(index_instrument = ["0#.SPX"],
periods_back = 3,
show_df = False,
show_list = False)
```
```
Now we can choose only announcements post COVID.
```
Announcement_COVID_date = []
for k in (1,2,3):
index_Instruments_COVID_date = []
index_Announcement_post_COVID_list = []
for i in range(len(index_Announcement_list)):
index_Instrument_COVID_date = []
for j in reversed(index_Announcement_list[i].iloc[:,1]):
try: # Note that ' if (index_Announcement_list[i].iloc[1,1] - COVID_start_date).days >= 0: ' would not work
if (datetime.datetime.strptime(index_Announcement_list[i].iloc[:,1].iloc[-1], '%Y-%m-%d').date() - COVID_start_date).days >= 0:
while len(index_Instrument_COVID_date) == 0:
if (datetime.datetime.strptime(j, '%Y-%m-%d').date() - datetime.datetime.strptime("2020-02-20", '%Y-%m-%d').date()).days >= 0:
index_Instrument_COVID_date.append(j)
else:
index_Instrument_COVID_date.append("NaT")
except:
index_Instrument_COVID_date.append("NaT")
index_Instruments_COVID_date.append(index_Instrument_COVID_date[0])
Instruments_Announcement_COVID_date = pandas.DataFrame(index_Instruments_COVID_date, index = index_Announcement_df.Instrument.unique(), columns = ["Date"])
Instruments_Announcement_COVID_date.Date = pandas.to_datetime(Instruments_Announcement_COVID_date.Date)
Announcement_COVID_date.append(Instruments_Announcement_COVID_date)
Instruments_Income_Statement_Announcement_COVID_date = Announcement_COVID_date[0]
Instruments_Income_Statement_Announcement_COVID_date.columns = ["Date of the First Income Statement Announced after COVID"]
Instruments_Cash_Flow_Statement_Announcement_COVID_date = Announcement_COVID_date[1]
Instruments_Cash_Flow_Statement_Announcement_COVID_date.columns = ["Date of the First Cash Flow Statement Announced after COVID"]
Instruments_Balance_Sheet_COVID_date = Announcement_COVID_date[2]
Instruments_Balance_Sheet_COVID_date.columns = ["Date of the First Balance Sheet Announced after COVID"]
```
$$ \\ $$
## Daily Price
### Post COVID
The cell below collects Daily Close Prices for all relevant instruments in the index chosen.
```
for i in index_Announcement_df.iloc[:,0].unique():
Get_Daily_Close(i, days_back = days_since_COVID)
```
Some instruments might have been added to the index midway through our time period of choice. They are the ones below:
```
removing = [i.split()[0] + " Close Price" for i in daily_df.iloc[0,:][daily_df.iloc[0,:].isna() == True].index]
print("We will be removing " + removing + " from our dataframe")
```
The cell below will remove them to make sure that they do not skew our statistics later on in the code.
```
# This line removes instruments that were added midway to the index
daily_df_no_na = daily_df.drop(removing, axis = 1).dropna()
```
Now we can focus on stock price movements alone.
```
daily_df_trend = pandas.DataFrame(columns = daily_df_no_na.columns)
for i in range(len(pandas.DataFrame.transpose(daily_df_no_na))):
daily_df_trend.iloc[:,i] = daily_df_no_na.iloc[:,i] - daily_df_no_na.iloc[0,i]
```
The following 3 cells display plots to visualise our data this far.
```
datasubset_list = []
for i in range(len(daily_df_no_na.columns)):
datasubset_list.append(i)
plot1ax(dataset = daily_df_no_na,
ylabel = "Close Price",
title = "Index Constituents' Close Prices",
xlabel = "Date",
legend = False,
datasubset = datasubset_list)
plot1ax(dataset = daily_df_trend, legend = False,
ylabel = "Normalised Close Price",
title = "Index Constituents' Change in Close Prices",
datasubset = datasubset_list, xlabel = "Date",)
```
The graph above shows the change in constituent companies' close prices since COVID.
$ \\ $
## Saving our data
The cell below saves variables to a 'pickle' file to quicken subsequent runs of this code if they are deemed necessary.
```
# pip install pickle-mixin
import pickle
pickle_out = open("SPX.pickle","wb")
pickl = (COVID_start_date, days_since_COVID,
index_Announcement_df, index_Announcement_list,
Announcement_COVID_date,
Instruments_Income_Statement_Announcement_COVID_date,
Instruments_Cash_Flow_Statement_Announcement_COVID_date,
Instruments_Balance_Sheet_COVID_date,
daily_df, daily_df_no_na,
daily_df_trend, datasubset_list)
pickle.dump(pickl, pickle_out)
pickle_out.close()
```
The cell below can be run to load these variables back into the kernel
```
# pickle_in = open("pickl.pickle","rb")
# COVID_start_date, days_since_COVID, index_Announcement_df, index_Announcement_list, Announcement_COVID_date, Instruments_Income_Statement_Announcement_COVID_date, Instruments_Cash_Flow_Statement_Announcement_COVID_date, Instruments_Balance_Sheet_COVID_date, daily_df, daily_df_no_na, daily_df_trend, datasubset_list = pickle.load(pickle_in)
```
$$ \\ $$
## Post-COVID-Announcement Price Insight
Now we can start investigating price changes after the first Post-COVID-Announcement of each company in our dataset.
```
# This is just to delimitate between the code before and after this point
daily_df2 = daily_df_no_na
```
The cell below formats the date type of our data so that the dates can be used in simple date arithmetic.
```
date_in_date_format = []
for k in range(len(daily_df2)):
date_in_date_format.append(daily_df2.index[k].date())
daily_df2.index = date_in_date_format
```
The cell below extracts the instrument names from the column names of our dataset.
```
daily_df2_instruments = []
for i in daily_df2.columns:
daily_df2_instruments.append(str.split(i)[0])
```
Now we collect daily prices only for dates after the first Post-COVID-Announcement of each instrument of interest.
```
daily_df2_post_COVID_announcement = pandas.DataFrame()
for i,j in zip(daily_df2.columns, daily_df2_instruments):
daily_df2_post_COVID_announcement = pandas.merge(daily_df2_post_COVID_announcement,
daily_df2[i][daily_df2.index >= Instruments_Income_Statement_Announcement_COVID_date.loc[j].iloc[0].date()],
how = "outer", left_index = True, right_index = True) # Note that the following would not work: ' daily_df2_post_COVID_announcement[i] = daily_df2[i][daily_df2.index >= Instruments_Income_Statement_Announcement_COVID_date.loc[j].iloc[0].date()] '
```
Now we can focus on the trend/change in those prices
```
daily_df2_post_COVID_announcement_trend = pandas.DataFrame()
for i in daily_df2.columns:
try:
daily_df2_post_COVID_announcement_trend = pandas.merge(daily_df2_post_COVID_announcement_trend,
daily_df2_post_COVID_announcement.reset_index()[i].dropna().reset_index()[i] - daily_df2_post_COVID_announcement.reset_index()[i].dropna().iloc[0],
how = "outer", left_index = True, right_index = True)
except:
daily_df2_post_COVID_announcement_trend[i] = numpy.nan
```
And plot them
```
plot1ax(dataset = daily_df2_post_COVID_announcement_trend,
ylabel = "Normalised Close Price",
title = "Index Constituents' Trend In Close Prices From There First Income Statement Announcement Since COVID\n" +
"Only companies that announced an Income Statement since the start of COVID (i.e.:" + str(COVID_start_date) + ") will show",
xlabel = "Days since first Post-COVID-Announcement",
legend = False, # change to "underneath" to see list of all instruments and their respective colors as per this graph's legend.
datasubset = datasubset_list)
```
Some companies have lost or gained a great deal following their first Post-COVID-Announcement, but most seem to have changed by less than 50 United States Dollars (USD).
$$ \\ $$
### Post COVID Announcement Price Change
The cell below simply gathers all stocks that decreased, increased, or did not change in price since their first Post-COVID-Announcement into an easy-to-digest [pandas](https://pandas.pydata.org/) table. Note that if they haven't had a Post-COVID-Announcement yet, they will show as unchanged.
```
COVID_priced_in = [[],[],[]]
for i in daily_df2_post_COVID_announcement_trend.columns:
if str(sum(daily_df2_post_COVID_announcement_trend[i].dropna())) != "nan":
if numpy.mean(daily_df2_post_COVID_announcement_trend[i].dropna()) < 0:
COVID_priced_in[0].append(str.split(i)[0])
if numpy.mean(daily_df2_post_COVID_announcement_trend[i].dropna()) == 0:
COVID_priced_in[1].append(str.split(i)[0])
if numpy.mean(daily_df2_post_COVID_announcement_trend[i].dropna()) > 0:
COVID_priced_in[2].append(str.split(i)[0])
COVID_priced_in = pandas.DataFrame(COVID_priced_in, index = ["Did not have the negative impact of COVID priced in enough",
"Had the effects of COVID priced in (or didn't have time to react to new company announcements)",
"Had a price that overcompensated the negative impact of COVID"])
COVID_priced_in
```
$$ \\ $$
## Informative Powers of Announcements Per Sector
We will now break the analysis down by industry sector.
The 2 cells below allow us to see the movement in daily price of companies with Post-COVID-Announcements per sector
```
ESector, err = ek.get_data(instruments = [i.split()[0] for i in daily_df2_post_COVID_announcement_trend.dropna(axis = "columns", how = "all").columns],
fields = ["TR.TRBCEconSectorCode",
"TR.TRBCBusinessSectorCode",
"TR.TRBCIndustryGroupCode",
"TR.TRBCIndustryCode",
"TR.TRBCActivityCode"])
ESector["TRBC Economic Sector"] = numpy.nan
ESector_list = [[],[],[],[],[],[],[],[],[],[]]
Sectors_list = ["Energy", "Basic Materials", "Industrials", "Consumer Cyclicals",
"Consumer Non-Cyclicals", "Financials", "Healthcare",
"Technology", "Telecommunication Services", "Utilities"]
for i in range(len(ESector["TRBC Economic Sector Code"])):
for j,k in zip(range(0, 10), Sectors_list):
if ESector.iloc[i,1] == (50 + j):
ESector.iloc[i,6] = k
ESector_list[j].append(ESector.iloc[i,0])
ESector_df = numpy.transpose(pandas.DataFrame(data = [ESector_list[i] for i in range(len(ESector_list))],
index = Sectors_list))
ESector_df_by_Sector = []
for k in Sectors_list:
ESector_df_by_Sector.append(numpy.average([numpy.average(daily_df2_post_COVID_announcement_trend[i + " Close Price"].dropna()) for i in [j for j in ESector_df[k].dropna()]]))
ESector_average = pandas.DataFrame(data = ESector_df_by_Sector,
columns = ["Average of Close Prices Post COVID Announcement"],
index = Sectors_list)
ESector_average
```
The 'ESector_average' table above shows each company's post-COVID-Announcement close-price movement, averaged per sector.
$$ \\ $$
$$ \\ $$
The cells below now allow us to visualise this trend in a graph on an industry sector basis
```
Sector_Average = []
for k in ESector_average.index:
Sector_Average1 = []
for j in range(len(pandas.DataFrame([daily_df2_post_COVID_announcement_trend[i + " Close Price"].dropna() for i in ESector_df[k].dropna()]).columns)):
Sector_Average1.append(numpy.average(pandas.DataFrame([daily_df2_post_COVID_announcement_trend[i + " Close Price"].dropna() for i in ESector_df[k].dropna()]).iloc[:,j].dropna()))
Sector_Average.append(Sector_Average1)
Sector_Average = numpy.transpose(pandas.DataFrame(Sector_Average, index = ESector_average.index))
```
The cell below, in particular, allows us to save our data before continuing so that we don't have to ask Eikon for the data again were we to manipulate the same content later (just in case)
```
pickle_out = open("SPX2.pickle","wb")
pickl = (COVID_start_date, days_since_COVID,
index_Announcement_df, index_Announcement_list,
Announcement_COVID_date,
Instruments_Income_Statement_Announcement_COVID_date,
Instruments_Cash_Flow_Statement_Announcement_COVID_date,
Instruments_Balance_Sheet_COVID_date,
daily_df, daily_df_no_na,
daily_df_trend, datasubset_list)
pickle.dump(pickl, pickle_out)
pickle_out.close()
plot1ax(dataset = Sector_Average, ylabel = "Price Movement",
title = "Index Constituents' Trend In Close Prices From There First Income Statement Announcement Since COVID Sorted By Sector\n" +
"Only companies that announced an Income Statement since the start of COVID (i.e.:" + str(COVID_start_date) + ") will show",
xlabel = "Trading Day", legend = "underneath",
datasubset = [i for i in range(len(Sector_Average.columns))])
```
$$ \\ $$
# Conclusion
Using S&P 500 (i.e.: SPX) data, this last graph can provide a wholesome picture of industries in the United States of America (USA). We can see a great negative change in instruments’ daily close prices for stocks in the Consumer Cyclical, Utilities, Healthcare and Industrial markets. This is actually surprising because they are the industries that were suggested to be most hindered by COVID in the media before their financial statement announcements; investors thus ought to have priced the negative effects of the Disease on these market sectors appropriately. \
The graph suggests that it may be profitable to short companies within these sectors just before they are due to release their first post-COVID Financial Statements - but it naturally does not account for future changes, trade costs or other such factors external to this investigation. \
Companies in the Financial sector seem to have performed adequately. Reasons for movements in this sector can be complex and numerous due to their exposure to all other sectors. \
Tech companies seem to have had the impact of COVID priced in prior to the release of their financial statements. One may postulate the impact of COVID on their share price was actually positive as people rush to online infrastructures they support during confinement. \
Companies dealing with Basic Materials have performed relatively well. This may be an indication that investors are losing confidence in all but sectors that offer physical goods in supply chains (rather than in consumer goods) - a retreat to fundamentals in a time of uncertainty. \
**BUT** one must use both the ESector_average table and the last graph before coming to any conclusion. The ESector_average - though simple - can provide more depth to our analysis. Take the Healthcare sector for example: One may assume – based on the last graph alone – that this sector is performing badly when revealing information via Announcements; but the ESector_average shows a positive ‘Average of Close Prices Post COVID Announcement’. This is because only very few companies within the Healthcare sector published Announcements before May 2020, and the only ones that did performed badly, skewing the data negatively on the graph.
## References
You can find more detail regarding the Eikon Data API and related technologies for this notebook from the following resources:
* [Refinitiv Eikon Data API page](https://developers.refinitiv.com/eikon-apis/eikon-data-api) on the [Refinitiv Developer Community](https://developers.refinitiv.com/) web site.
* [Eikon Data API Quick Start Guide page](https://developers.refinitiv.com/eikon-apis/eikon-data-api/quick-start).
* [Eikon Data API Tutorial page](https://developers.refinitiv.com/eikon-apis/eikon-data-api/learning).
* [Python Quants Video Tutorial Series for Eikon API](https://community.developers.refinitiv.com/questions/37865/announcement-new-python-quants-video-tutorial-seri.html).
* [Eikon Data API Python Reference Guide](https://docs-developers.refinitiv.com/1584688434238/14684/book/en/index.html).
* [Eikon Data API Troubleshooting article](https://developers.refinitiv.com/article/eikon-data-apipython-troubleshooting-refinitiv).
* [Pandas API Reference](https://pandas.pydata.org/docs/reference/index.html).
For any question related to this example or Eikon Data API, please use the Developers Community [Q&A Forum](https://community.developers.refinitiv.com/spaces/92/eikon-scripting-apis.html).
# Neural Network for Hadronic Top Reconstruction
This file creates a feed-forward binary classification neural network for hadronic top reconstruction by classifying quark jet triplets as being from a top quark or not.
```
from __future__ import print_function, division
import pandas as pd
import numpy as np
import torch as th
from torch.autograd import Variable
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from sklearn.metrics import f1_score, roc_auc_score
from nn_classes import *
import utils
```
## Load the Datasets
Here I load the datasets using my custom <code>Dataset</code> class. This ensures that the data is scaled properly and then the PyTorch <code>DataLoader</code> shuffles and iterates over the dataset in batches.
```
trainset = utils.CollisionDataset("ttH_hadT_cut_raw_train.csv", header=0, target_col=0, index_col=0)
valset = utils.CollisionDataset("ttH_hadT_cut_raw_val.csv", header=0, target_col=0, index_col=0, scaler=trainset.scaler)
testset = utils.CollisionDataset("ttH_hadT_cut_raw_test.csv", header=0, target_col=0, index_col=0, scaler=trainset.scaler)
trainloader = DataLoader(trainset, batch_size=512, shuffle=True, num_workers=5)
testloader = DataLoader(testset, batch_size=512, shuffle=True, num_workers=5)
```
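The `utils.CollisionDataset` class itself is not shown in this notebook. Purely as an illustration of the pattern (a hypothetical stand-in, not the project's actual implementation), a scaled, CSV-backed Dataset could look roughly like this:
```python
import pandas as pd
import torch as th
from sklearn.preprocessing import StandardScaler
from torch.utils.data import Dataset

class SimpleCollisionDataset(Dataset):
    """Toy stand-in: load a CSV, scale the features, and return (features, target) tensor pairs."""
    def __init__(self, path, header=0, target_col=0, index_col=0, scaler=None):
        df = pd.read_csv(path, header=header, index_col=index_col)
        y = df.iloc[:, target_col].values.astype('float32').reshape(-1, 1)
        X = df.drop(columns=df.columns[target_col]).values.astype('float32')
        # Fit the scaler on the training file only; pass it in for the validation/test sets.
        self.scaler = scaler if scaler is not None else StandardScaler().fit(X)
        self.features = th.from_numpy(self.scaler.transform(X).astype('float32'))
        self.targets = th.from_numpy(y)
        self.shape = self.features.shape  # used to size the network's input layer

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.targets[idx]
```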
## Initialize the NN, Loss Function, and Optimizer
```
input_dim = trainset.shape[1]
net = DHTTNet(input_dim)
criterion = nn.BCELoss()
optimizer = optim.Adam(net.parameters())
```
## Train the Neural Network
```
train_X = Variable(trainset[:][0])
train_y = trainset[:][1].numpy()
val_X = Variable(valset[:][0])
val_y = valset[:][1].numpy()
train_discriminant = net(train_X).data.numpy()
val_discriminant = net(val_X).data.numpy()
val_curve = [(roc_auc_score(train_y, train_discriminant), roc_auc_score(val_y, val_discriminant))]
for epoch in range(1, 4):
if epoch%2 == 0: print(epoch)
for batch in trainloader:
inputs, targets = Variable(batch[0]), Variable(batch[1])
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
#Evaluate the model on the training set
train_discriminant = net(train_X).data.numpy()
# Evaluate the model on a validation set
val_discriminant = net(val_X).data.numpy()
# Add the ROC AUC to the curve
val_curve.append((roc_auc_score(train_y, train_discriminant), roc_auc_score(val_y, val_discriminant)))
print("Done")
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
lines = plt.plot(range(1, len(val_curve)+1), val_curve)
ax.set_ylabel("ROC AUC")
ax.set_xlabel("Epochs Finished")
ax.set_title("Validation Curves")
labels = ["Training", "Validation"]
plt.legend(lines, labels, loc='lower right')
fig.set_size_inches(18, 10)
fig.savefig("hello.png")
```
## Evaluate the Model's Accuracy
```
correct = 0
total = 0
# For Binary
for data in testloader:
    images, labels = data[0].float(), data[1].long()  # the Dataset yields (features, target) tuples
outputs = net(Variable(images))
predicted = th.round(outputs.data).long()
total += labels.size(0)
    correct += (predicted.view(-1, 1) == labels.view(-1, 1)).sum().item()
print('Accuracy of the network on the {} samples: {:f} %'.format(len(testset), (
100 * correct / total)))
```
## Save the Model
Here we only serialize the model parameters, i.e. the weights and such, to be loaded again later as follows:
```python
model = DHTTNet(<input_dim>) # Should be the same input dimensions as before.
model.load_state_dict(th.load(<Path>))
```
```
th.save(net.state_dict(), "neural_net.torch")
```
# Predictive performance comparison
This notebook takes a look at the predictive performance on cell lines for all the drugs. The aim is two-fold:
<ul>
<li> Assessing whether the source top PVs can yield the same predictive performance as a direct ridge regression on the source data. This would mean that the top PVs contain the relevant information for drug response prediction.
<li> Taking a look at which drugs get predicted well using both the PV duos and the consensus representation.
</ul>
We here use all the cell line data for the domain adaptation. Other settings can be imagined as well.
## Parameters (to change)
```
# None for 'rnaseq', 'fpkm' for FPKM
type_data = 'rnaseq'
normalization = 'TMM'
transformation = 'log'
mean_center = True
std_unit = False
filter_mytochondrial = False
protein_coding_only = True
d_test = [40]
n_factors = 70
same_pv_pca = True
drug_file = 'input/drug_list_small.txt' # To change to drug_list.txt for full-scale analysis
n_jobs=5
import os, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
from sklearn.model_selection import GroupKFold, GridSearchCV
from sklearn.linear_model import ElasticNet, Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.externals.joblib import Parallel, delayed
import pickle
plt.style.use('ggplot')
#Import src implementations
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['KMP_DUPLICATE_LIB_OK']='True'
from data_reader.read_data import read_data
from data_reader.read_drug_response import read_drug_response
from data_reader.read_cna_tumors import read_cna_tumors
from normalization_methods.feature_engineering import feature_engineering
import precise
from precise import DrugResponsePredictor, ConsensusRepresentation
```
## Read all the drugs from the file and load all the data
```
with open(drug_file,'r') as drug_file_reader:
drug_file_content = drug_file_reader.read()
drug_file_content = drug_file_content.split('\n')
drug_file_content = [e.split(',') for e in drug_file_content]
# drug_IDs and tumor tissues are ordered in the same way
drug_IDs = np.array(list(zip(*drug_file_content))[0]).astype(int)
tumor_tissues = np.array(list(zip(*drug_file_content))[1])
unique_tumor_tissues = np.unique(tumor_tissues)
target_raw_data = dict()
source_raw_data = dict()
target_barcodes = dict()
source_names = dict()
target_data = dict()
source_data = dict()
source_data_filtered = dict()
source_response_data = dict()
source_names_filtered = dict()
drug_names = dict()
target_primary_site = dict()
# Load cell line data
# /!\ Due to some mismatch in the genes available in TCGA, cell line data has to be loaded all the time
for tissue_name in unique_tumor_tissues:
print(tissue_name)
if tissue_name in target_raw_data:
continue
X_target, X_source, _, s, target_names = read_data('cell_line',
'tumor',
'count',
None,
tissue_name,
filter_mytochondrial)
target_raw_data[tissue_name] = X_target
source_raw_data[tissue_name] = X_source
target_barcodes[tissue_name] = target_names
source_names[tissue_name] = s
# Normalize the data
for tissue_name in unique_tumor_tissues:
print(tissue_name)
if tissue_name in target_data:
continue
target_data[tissue_name] = feature_engineering(target_raw_data[tissue_name],
normalization,
transformation,
mean_center,
std_unit)
# source data is not mean-centered as it will be done during cross-validation procedure.
source_data[tissue_name] = feature_engineering(source_raw_data[tissue_name],
normalization,
transformation,
False,
False)
# Normalize for variance
for tissue_name in unique_tumor_tissues:
print(tissue_name)
target_total_variance = np.sqrt(np.sum(np.var(target_data[tissue_name], 0)))
target_data[tissue_name] = target_data[tissue_name] / target_total_variance * 10**3
source_total_variance = np.sqrt(np.sum(np.var(source_data[tissue_name], 0)))
source_data[tissue_name] = source_data[tissue_name] / source_total_variance * 10**3
# Read drug response
for i, (ID, tissue) in enumerate(zip(drug_IDs, tumor_tissues)):
if (ID, tissue) in source_data_filtered:
continue
x, y, s, name = read_drug_response(ID,
source_data[tissue],
source_names[tissue],
'count')
source_data_filtered[(ID, tissue)] = x
source_response_data[(ID, tissue)] = y
drug_names[(ID, tissue)] = name
source_names_filtered[(ID, tissue)] = s
```
## Principal vector test
Here we compute the predictive performance for several different drugs using either the source, the target, or both principal vectors. The latter is still biased towards the source.
### Consensus representation
```
l1_ratio = 0
for ID, tissue in zip(drug_IDs, tumor_tissues):
print(ID, tissue)
X_source = source_data_filtered[ID, tissue]
y_source = source_response_data[ID, tissue]
X_target = target_data[tissue]
pickle_file = 'consensus_drug_%s_tissue_%s_l1_ratio_%s_n_factors_%s.pkl'%(ID,
tissue,
l1_ratio,
n_factors)
if pickle_file in os.listdir('./output/pred_performance/'):
print('%s, %s ALREADY COMPUTED'%(ID, tissue))
continue
with open('./output/pred_performance/%s'%(pickle_file), 'wb') as f:
pickle.dump(dict(), f, pickle.HIGHEST_PROTOCOL)
pred_performance = {}
for d in d_test:
print(d)
predictor = DrugResponsePredictor(source_data=source_data[tissue][~np.isin(source_names[tissue], source_names_filtered[(ID, tissue)])],\
method='consensus',\
n_representations = 100,\
target_data=X_target,\
n_pv=d,\
n_factors=n_factors,\
n_jobs=n_jobs,\
mean_center=mean_center,\
std_unit=std_unit,\
l1_ratio=l1_ratio)
predictor.alpha_values = list(np.logspace(-2,10,17))
predictor.verbose = 5
predictor.fit(X_source, y_source, use_data=True)
pred_performance[d] = predictor.compute_predictive_performance(X_source, y_source)
plt.plot(predictor.alpha_values, predictor.regression_model_.cv_results_['mean_test_score'], '+-')
plt.title(pred_performance[d])
plt.xscale('log')
plt.show()
with open('./output/pred_performance/%s'%(pickle_file), 'wb') as f:
pickle.dump(pred_performance, f, pickle.HIGHEST_PROTOCOL)
```
### ElasticNet/Ridge comparison
```
from sklearn.model_selection import GroupKFold
l1_ratio = 0.
pickle_file = 'elasticnet_drug_l1_ratio_%s_std.pkl'%(l1_ratio)
if pickle_file in os.listdir('./output/pred_performance/'):
    with open('./output/pred_performance/%s'%(pickle_file), 'rb') as f:
        elasticnet_perf = pickle.load(f)
else:
    # Start from an empty results dictionary if no previous run was saved.
    elasticnet_perf = dict()
for ID, tissue in zip(drug_IDs, tumor_tissues):
print(ID, tissue)
pickle_file = 'en_std_drug_%s_tissue_%s_l1_ratio_%s_n_factors_%s.pkl'%(ID,
tissue,
l1_ratio,
n_factors)
if pickle_file in os.listdir('./output/pred_performance/'):
print('%s, %s ALREADY COMPUTED'%(ID, tissue))
continue
if (ID, tissue) in elasticnet_perf:
continue
with open('./output/pred_performance/%s'%(pickle_file), 'wb') as f:
pickle.dump(dict(), f, pickle.HIGHEST_PROTOCOL)
X_source = source_data_filtered[ID, tissue]
y_source = source_response_data[ID, tissue]
X_target = target_data[tissue]
#Parameters for the grid search
alpha_values = np.logspace(-5,10,16)
param_grid ={
'regression__alpha': alpha_values
}
#Grid search setup
k_fold_split = GroupKFold(10)
y_predicted = np.zeros(X_source.shape[0])
for train_index, test_index in k_fold_split.split(X_source, y_source, y_source):
grid_en = GridSearchCV(Pipeline([
('normalization', StandardScaler(with_mean=mean_center, with_std=True)),
('regression', ElasticNet(l1_ratio) if l1_ratio > 0 else Ridge())
]),\
cv=10, n_jobs=30, param_grid=param_grid, verbose=1, scoring='neg_mean_squared_error')
grid_en.fit(X_source[train_index], y_source[train_index])
y_predicted[test_index] = grid_en.predict(X_source[test_index])
#Fit grid search
grid_en.fit(X_source, y_source)
elasticnet_perf[ID, tissue] = scipy.stats.pearsonr(y_predicted, y_source)[0]
print(elasticnet_perf[ID, tissue])
with open('./output/pred_performance/%s'%(pickle_file), 'wb') as f:
pickle.dump(elasticnet_perf[ID, tissue], f, pickle.HIGHEST_PROTOCOL)
```
## Load pickle and look at results
```
l1_ratio = 0
l1_ratio_en = 0.
two_pv_results = dict()
consensus_pv_results = dict()
source_pv_results = dict()
target_pv_results = dict()
en_results_std = dict()
def sort_dictionary(d):
return {e:d[e] for e in sorted(d)}
for ID, tissue in zip(drug_IDs, tumor_tissues):
print(ID, tissue)
# Read results of consensus PVs
pickle_file = 'consensus_drug_%s_tissue_%s_l1_ratio_%s_n_factors_%s.pkl'%(ID,
tissue,
l1_ratio,
n_factors)
with open('./output/pred_performance/%s'%(pickle_file), 'rb') as f:
consensus_pv_results[ID,tissue] = sort_dictionary(pickle.load(f))
# Read results of EN
pickle_file = 'en_std_drug_%s_tissue_%s_l1_ratio_%s_n_factors_%s.pkl'%(ID,
tissue,
'0.0',
n_factors)
with open('./output/pred_performance/%s'%(pickle_file), 'rb') as f:
en_results_std[ID,tissue] = pickle.load(f)
    print(en_results_std[ID, tissue])
for ID, tissue in zip(drug_IDs, tumor_tissues):
# Plot for a specific number of PV
plt.plot([e[0] for e in consensus_pv_results[ID,tissue].items()],
[e[1] for e in consensus_pv_results[ID,tissue].items()],
label='consensus', linewidth=3, alpha=0.5, marker='+')
plt.plot([e[0] for e in source_pv_results[ID,tissue].items()],
[e[1] for e in source_pv_results[ID,tissue].items()],
label='source', linewidth=3, alpha=0.5, marker='+')
plt.plot([e[0] for e in target_pv_results[ID,tissue].items()],
[e[1] for e in target_pv_results[ID,tissue].items()],
label='target', linewidth=3, alpha=0.5, marker='+')
plt.plot([e[0] for e in two_pv_results[ID,tissue].items()],
[e[1] for e in two_pv_results[ID,tissue].items()],
label='2 pv', linewidth=3, alpha=0.5, marker='+')
plt.hlines(en_results[ID,tissue], xmin=0, xmax=plt.xlim()[1], label='Ridge', linewidth=3, alpha=0.7)
plt.title(drug_names[ID, tissue] + ' '+ tissue)
plt.xlabel('Number of Principal Vectors', fontsize=15)
plt.ylabel('Predictive Performance', fontsize=15)
plt.legend()
plt.show()
n_pv = 40
perf_scatter = []
for ID, tissue in zip(drug_IDs, tumor_tissues):
#print(ID, tissue)
if n_pv not in consensus_pv_results[ID,tissue]:
print(ID, tissue)
continue
plt.scatter(en_results_std[ID,tissue],
consensus_pv_results[ID,tissue][n_pv],
color='blue', marker='x', alpha=0.7)
perf_scatter.append([en_results_std[ID,tissue], consensus_pv_results[ID,tissue][n_pv]])
plt.xlabel('ElasticNet', fontsize=20)
plt.ylabel('Consensus \n representation', fontsize=20)
plt.xticks(fontsize=15, color='black')
plt.yticks(fontsize=15, color='black')
plt.tight_layout()
plt.xlim(0.1,0.8)
plt.ylim(0.1,0.8)
plt.plot(plt.xlim(), plt.xlim(), linewidth=3, alpha=0.5)
#plt.savefig('./figures/fig4_pred_perf_consensus_%s_en_%s.png'%(l1_ratio, l1_ratio_en), dpi=300)
plt.show()
perf_scatter = np.array(perf_scatter)
p = scipy.stats.pearsonr(perf_scatter[:,0], perf_scatter[:,1])
print('Pearson Correlation: %s, %s'%(p[0], p[1]))
plt.scatter(perf_scatter[:,1], (perf_scatter[:,0] - perf_scatter[:,1])/perf_scatter[:,0])
np.median((perf_scatter[:,0] - perf_scatter[:,1])/perf_scatter[:,0])
#for e in en_results:
# print(e, en_results[e], consensus_pv_results[e])
for ID, tissue in zip(drug_IDs, tumor_tissues):
#print(ID, tissue)
if n_pv not in consensus_pv_results[ID,tissue]:
print(ID, tissue)
continue
plt.scatter(en_results[ID,tissue],
en_results_std[ID,tissue],
color='blue', marker='x', alpha=0.7)
#perf_scatter.append([en_results[ID,tissue], consensus_pv_results[ID,tissue][n_pv]])
```
<a href="https://colab.research.google.com/github/vgaurav3011/100-Days-of-ML/blob/master/DCGAN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import glob
import imageio
import os
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import layers
import time
from IPython import display
import PIL
from tensorflow.keras.datasets import mnist
(train_images, train_labels), (_,_) = mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5
batch_size = 256
buffer_size = 60000
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(buffer_size).batch(batch_size)
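# Generator: maps a 100-dimensional noise vector to a 28x28x1 image.
# Dense -> reshape to 7x7x256, then three Conv2DTranspose layers upsample 7x7 -> 7x7 -> 14x14 -> 28x28,
# with BatchNorm + LeakyReLU in between and a tanh output matching the [-1, 1] pixel scaling above.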
def generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
return model
generator = generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
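# Discriminator: a CNN binary classifier with two strided convolutions, LeakyReLU and Dropout,
# flattened to a single unbounded logit scoring how "real" an input image looks.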
def discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(64, (5,5), strides=(2,2), padding='same', input_shape=[28,28,1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.1))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.1))
model.add(layers.Flatten())
model.add(layers.Dense(1))
return model
discriminator = discriminator_model()
decision = discriminator(generated_image)
print (decision)
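# Both losses apply sigmoid cross-entropy to the discriminator's logits:
# the discriminator is trained to output 1 for real images and 0 for generated ones,
# while the generator is trained so that its fakes are scored as 1 ("real").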
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
seed = tf.random.normal([num_examples_to_generate, noise_dim])
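# One training step: sample noise, generate fakes, score real and fake batches,
# compute both losses under gradient tapes, and update each network with its own optimizer.
# The tf.function decorator compiles this step into a TensorFlow graph for speed.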
@tf.function
def train_step(images):
noise = tf.random.normal([batch_size, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
seed)
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
seed)
def generate_and_save_images(model, epoch, test_input):
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
train(train_dataset, EPOCHS)
PIL.Image.open('image_at_epoch_{:04d}.png'.format(EPOCHS))
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
anim_file = 'output.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import IPython
if IPython.version_info > (6,2,0,''):
display.Image(filename=anim_file)
```
# Contour Plots
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
def f(x, y):
return x**2 + y**2
x = np.arange(-5, 5.0, 0.25)
y = np.arange(-5, 5.0, 0.25)
print(x[:10])
print(y[:10])
```
### Meshgrid
```python
np.meshgrid(
*xi,
copy=True,
sparse=False,
indexing='xy'
)
```
Return coordinate matrices from coordinate vectors.
Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector fields over N-D grids, given one-dimensional coordinate arrays x1, x2,…, xn.
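As a small aside (not one of the original cells), the `sparse` option returns broadcastable grids instead of fully expanded matrices, which saves memory on large grids:
```python
import numpy as np

xs, ys = np.meshgrid(np.arange(3), np.arange(4), sparse=True)
print(xs.shape, ys.shape)       # (1, 3) (4, 1) rather than two full (4, 3) arrays
print((xs**2 + ys**2).shape)    # (4, 3): broadcasting expands the grid only when needed
```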
```
X, Y = np.meshgrid(x, y)
print(X)
print(Y)
plt.scatter(X, Y, s=10);
Z = f(X, Y)
print(Z)
plt.contour(X, Y, Z, colors='black');
```
### Colormaps and colorbars
The `cmap` argument of the plotting functions accepts any of Matplotlib's named colormaps:
'BuGn_r', 'BuPu', 'BuPu_r', 'CMRmap', 'CMRmap_r', 'Dark2', 'Dark2_r', 'GnBu', 'GnBu_r', 'Greens', 'Greens_r', 'Greys', 'Greys_r', 'OrRd', 'OrRd_r', 'Oranges', 'Oranges_r', 'PRGn', 'PRGn_r', 'Paired', 'Paired_r', 'Pastel1', 'Pastel1_r', 'Pastel2', 'Pastel2_r', 'PiYG', 'PiYG_r', 'PuBu', 'PuBuGn', 'PuBuGn_r', 'PuBu_r', 'PuOr', 'PuOr_r', 'PuRd', 'PuRd_r', 'Purples', 'Purples_r', 'RdBu', 'RdBu_r', 'RdGy', 'RdGy_r', 'RdPu', 'RdPu_r', 'RdYlBu', 'RdYlBu_r', 'RdYlGn', 'RdYlGn_r', 'Reds', 'Reds_r', 'Set1', 'Set1_r', 'Set2', 'Set2_r', 'Set3', 'Set3_r', 'Spectral', 'Spectral_r', 'Wistia', 'Wistia_r', 'YlGn', 'YlGnBu', 'YlGnBu_r', 'YlGn_r', 'YlOrBr', 'YlOrBr_r', 'YlOrRd', 'YlOrRd_r', 'afmhot', 'afmhot_r', 'autumn', 'autumn_r', 'binary', 'binary_r', 'bone', 'bone_r', 'brg', 'brg_r', 'bwr', 'bwr_r', 'cividis', 'cividis_r', 'cool', 'cool_r', 'coolwarm', 'coolwarm_r', 'copper', 'copper_r', 'cubehelix', 'cubehelix_r', 'flag', 'flag_r', 'gist_earth', 'gist_earth_r', 'gist_gray', 'gist_gray_r', 'gist_heat', 'gist_heat_r', 'gist_ncar', 'gist_ncar_r', 'gist_rainbow', 'gist_rainbow_r', 'gist_stern', 'gist_stern_r', 'gist_yarg', 'gist_yarg_r', 'gnuplot', 'gnuplot2', 'gnuplot2_r', 'gnuplot_r', 'gray', 'gray_r', 'hot', 'hot_r', 'hsv', 'hsv_r', 'inferno', 'inferno_r', 'jet', 'jet_r', 'magma', 'magma_r', 'nipy_spectral', 'nipy_spectral_r', 'ocean', 'ocean_r', 'pink', 'pink_r', 'plasma', 'plasma_r', 'prism', 'prism_r', 'rainbow', 'rainbow_r', 'seismic', 'seismic_r', 'spring', 'spring_r', 'summer', 'summer_r', 'tab10', 'tab10_r', 'tab20', 'tab20_r', 'tab20b', 'tab20b_r', 'tab20c', 'tab20c_r', 'terrain', 'terrain_r', 'turbo', 'turbo_r', 'twilight', 'twilight_r', 'twilight_shifted', 'twilight_shifted_r', 'viridis', 'viridis_r', 'winter', 'winter_r'
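This list can also be generated programmatically (a small aside, not part of the original notebook):
```
import matplotlib.pyplot as plt
# names of all registered colormaps, including the reversed '_r' variants
print(plt.colormaps())
```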
```
plt.contourf(X, Y, Z, 20, cmap='RdGy')
plt.colorbar();
plt.contourf(X, Y, Z, 20, cmap='cool')
plt.colorbar();
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2) * 2
fig, ax = plt.subplots()
CS = ax.contour(X, Y, Z)
```
# ThreadBuffer Performance
This notebook demonstrates the use of `ThreadBuffer` to generate batches of data asynchronously from the training thread.
Under certain circumstances the main thread can be busy with the training operations, that is interacting with GPU memory and invoking CUDA operations, which is independent of batch generation operations. If the time taken to generate a batch is significant compared to the time taken to train the network for an iteration, and assuming operations can be done in parallel given the limitations of the GIL or other factors, this should speed up the whole training process. The efficiency gains will be relative to the proportion of these two times, so if batch generation is lengthy but training is very fast then very little parallel computation is possible.
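As a toy illustration of this pattern (an aside, assuming MONAI is already installed), any slow generator can be wrapped in a `ThreadBuffer` so that the next item is prepared in a background thread while the loop body runs:
```
import time
from monai.data import ThreadBuffer

def slow_batches(n=5):
    for i in range(n):
        time.sleep(0.1)  # simulate expensive batch generation
        yield i

# items are produced in a background thread while the loop body "trains"
for b in ThreadBuffer(slow_batches(), buffer_size=1):
    time.sleep(0.1)      # simulate the training step
    print(b)
```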
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/acceleration/threadbuffer_performance.ipynb)
## Setup Environment
The current MONAI master branch must be installed for this feature (as of release 0.3.0), skip this step if already installed:
```
%pip install git+https://github.com/Project-MONAI/MONAI#egg=MONAI
```
This install of PyTorch 1.6 specifically may be necessary on Colab:
```
%pip install torch==1.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
import numpy as np
import matplotlib.pyplot as plt
import torch
import monai
from monai.data import Dataset, DataLoader, ThreadBuffer, create_test_image_2d
from monai.networks.nets import UNet
from monai.losses import Dice
from monai.transforms import Compose, MapTransform, AddChanneld, ToTensord
monai.utils.set_determinism(seed=0)
monai.config.print_config()
```
The data pipeline is given here which creates random 2D segmentation training pairs. It is artificially slowed by setting the number of worker processes to 0 (often necessary under Windows).
```
class RandomGenerator(MapTransform):
"""Generates a dictionary containing image and segmentation images from a given seed value."""
def __call__(self, seed):
rs = np.random.RandomState(seed)
im, seg = create_test_image_2d(256, 256, num_seg_classes=1, random_state=rs)
return {self.keys[0]: im, self.keys[1]: seg}
data = np.random.randint(0, monai.utils.MAX_SEED, 1000)
trans = Compose(
[
RandomGenerator(keys=("im", "seg")),
AddChanneld(keys=("im", "seg")),
ToTensord(keys=("im", "seg")),
]
)
train_ds = Dataset(data, trans)
train_loader = DataLoader(train_ds, batch_size=20, shuffle=True, num_workers=0)
```
Network, loss, and optimizers defined as normal:
```
device = torch.device("cuda:0")
net = UNet(2, 1, 1, (8, 16, 32), (2, 2, 2), num_res_units=2).to(device)
loss_function = Dice(sigmoid=True)
optimizer = torch.optim.Adam(net.parameters(), 1e-5)
epoch_num = 10
```
A simple training function is defined which only performs step optimization of the network:
```
def train_step(batch):
inputs, labels = batch["im"].to(device), batch["seg"].to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = loss_function(outputs, labels)
loss.backward()
optimizer.step()
def train(use_buffer):
# wrap the loader in the ThreadBuffer if selected
src = ThreadBuffer(train_loader, 1) if use_buffer else train_loader
for epoch in range(epoch_num):
for batch in src:
train_step(batch)
```
Timing how long it takes to generate a single batch versus the time taken to optimize the network for one step reveals the proportion of time taken by each during each full training iteration:
```
it = iter(train_loader)
batch = next(it)
%timeit -n 1 next(it)
%timeit -n 1 train_step(batch)
```
Without using an asynchronous buffer for batch generation these operations must be sequential:
```
%timeit -n 1 train(False)
```
With overlap we see a significant speedup:
```
%timeit -n 1 train(True)
```
# 4 - Hybrid Absorbing Boundary Condition (HABC)
# 4.1 - Introduction
In this notebook we describe absorbing boundary conditions and their use combined with the *Hybrid Absorbing Boundary Condition* (*HABC*). The common points with the previous notebooks <a href="01_introduction.ipynb">Introduction to Acoustic Problem</a>, <a href="02_damping.ipynb">Damping</a> and <a href="03_pml.ipynb">PML</a> will be used here, with brief descriptions.
# 4.2 - Absorbing Boundary Conditions
We initially describe absorbing boundary conditions, namely the so-called Clayton A1 and A2 conditions and
the scheme from Higdon. These methods can be used as pure boundary conditions, designed to reduce reflections,
or as part of the Hybrid Absorbing Boundary Condition, in which they are combined with an absorption layer in a manner to be described ahead.
In the presentation of these boundary conditions we initially consider the wave equation to be solved on
the spatial domain $\Omega=\left[x_{I},x_{F}\right] \times\left[z_{I},z_{F}\right]$ as shown in the figure below. More details about the equation and domain definition can be found in the <a href="01_introduction.ipynb">Introduction to Acoustic Problem</a> notebook.
<img src='domain1.png' width=500>
## 4.2.1 - Clayton's A1 Boundary Condition
Clayton's A1 boundary condition is based on a one way wave equation (OWWE). This simple condition
is such that outgoing waves normal to the border would leave without reflection. At the $\partial \Omega_1$ part of the boundary
we have,
- $\displaystyle\frac{\partial u(x,z,t)}{\partial t}-c(x,z)\displaystyle\frac{\partial u(x,z,t)}{\partial x}=0.$
while at $\partial \Omega_3$ the condition is
- $\displaystyle\frac{\partial u(x,z,t)}{\partial t}+c(x,z)\displaystyle\frac{\partial u(x,z,t)}{\partial x}=0.$
and at $\partial \Omega_2$
- $\displaystyle\frac{\partial u(x,z,t)}{\partial t}+c(x,z)\displaystyle\frac{\partial u(x,z,t)}{\partial z}=0.$
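To get a feel for how such a one-way condition behaves, here is a minimal 1-D sketch (an illustrative aside, independent of the Devito implementation below): a leapfrog scheme for $u_{tt}=c^{2}u_{xx}$ with A1-type updates applied at both ends of the interval, so an initial pulse leaves the domain with very little reflected energy.
```
import numpy as np

nx, c = 201, 1.0
h = 1.0/(nx - 1)
dt = 0.5*h/c                               # CFL-stable time step
x = np.linspace(0.0, 1.0, nx)
u_prev = np.exp(-((x - 0.5)/0.05)**2)      # initial Gaussian pulse
u_now = u_prev.copy()                      # zero initial velocity (first-order start)
r2 = (c*dt/h)**2
k = (c*dt - h)/(c*dt + h)
for _ in range(800):                       # run until both halves of the pulse have left
    u_next = np.empty_like(u_now)
    u_next[1:-1] = 2*u_now[1:-1] - u_prev[1:-1] + r2*(u_now[2:] - 2*u_now[1:-1] + u_now[:-2])
    # A1 updates: u_t - c u_x = 0 at the left border, u_t + c u_x = 0 at the right border
    u_next[0] = u_now[1] + k*(u_next[1] - u_now[0])
    u_next[-1] = u_now[-2] + k*(u_next[-2] - u_now[-1])
    u_prev, u_now = u_now, u_next
print(np.abs(u_now).max())                 # small residual: little energy is reflected
```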
## 4.2.2 - Clayton's A2 Boundary Condition
The A2 boundary condition also aims to make outgoing waves leave the domain without being reflected. The condition is approximated (using a Padé approximation of the wave dispersion relation) by the following equation, to be imposed on the boundary part $\partial \Omega_1$
- $\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial t^{2}}+c(x,z)\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial x \partial t}+\frac{c^2(x,z)}{2}\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial z^{2}}=0.$
At $\partial \Omega_3$ we have
- $\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial t^{2}}-c(x,z)\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial x \partial t}+\frac{c^2(x,z)}{2}\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial z^{2}}=0.$
while at $\partial \Omega_2$ the condition is
- $\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial t^{2}}-c(x,z)\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial z \partial t}+\frac{c^2(x,z)}{2}\displaystyle\frac{\partial^{2} u(x,z,t)}{\partial x^{2}}=0.$
At the corner points the condition is
- $\sqrt{2}\displaystyle\frac{\partial u(x,z,t)}{\partial t}+c(x,z)\left(\displaystyle\frac{\partial u(x,z,t)}{\partial x}+\displaystyle\frac{\partial u(x,z,t)}{\partial z}\right)=0.$
## 4.2.3 - Higdon Boundary Condition
The Higdon boundary condition of order $p$ is given at $\partial \Omega_1$ and $\partial \Omega_3$ by:
- $\displaystyle\prod_{j=1}^{p}\left(\cos(\alpha_j)\displaystyle\frac{\partial }{\partial t}-c(x,z)\displaystyle\frac{\partial }{\partial x}\right)u(x,z,t)=0.$
and at $\partial \Omega_2$
- $\displaystyle\prod_{j=1}^{p}\left(\cos(\alpha_j)\displaystyle\frac{\partial}{\partial t}-c(x,z)\displaystyle\frac{\partial}{\partial z}\right)u(x,z,t)=0.$
This method is designed so that outgoing waves whose angle of incidence at the boundary equals one of the $\alpha_j$ are not reflected. The method we use in this notebook employs order 2 ($p=2$) and angles $0$ and $\pi/4$.
Observation: There are similarities between Clayton's A2 and the Higdon condition. If one chooses $p=2$ and
both angles equal to zero in Higdon's method, this leads to the condition:
$ u_{tt}-2cu_{xt}+c^2u_{xx}=0$. But, using the wave equation, we have that $c^2u_{xx}=u_{tt}-c^2u_{zz}$. Replacing this relation in the previous equation, we get: $2u_{tt}-2cu_{xt}-c^2u_{zz}=0$ which is Clayton's A2
boundary condition. In this sense, Higdon's method generalizes Clayton's scheme. The discretizations of the two methods are nevertheless quite different, since in Higdon's scheme the boundary operators are unidirectional, while in Clayton's A2 they are not.
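This small algebraic manipulation can be checked symbolically (a quick aside, not part of the original notebook):
```
import sympy as sp

x, t, c = sp.symbols('x t c')
u = sp.Function('u')(x, t)

# Higdon with p=2 and both angles zero: apply (d/dt - c d/dx) twice
one_way = lambda f: sp.diff(f, t) - c*sp.diff(f, x)
print(sp.expand(one_way(one_way(u))))  # u_tt - 2*c*u_xt + c**2*u_xx
```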
# 4.3 - Acoustic Problem with HABC
In the hybrid absorbing boundary condition (HABC) scheme we will also extend the spatial domain as $\Omega=\left[x_{I}-L,x_{F}+L\right] \times\left[z_{I},z_{F}+L\right]$.
We add to the target domain $\Omega_{0}=\left[x_{I},x_{F}\right]\times\left[z_{I},z_{F}\right]$ an extension zone of length $L$ at both ends of the direction $x$ and at the end of the domain in the direction $z$, as represented in the figure below.
<img src='domain2.png' width=500>
The difference with respect to previous schemes, is that this extended region will now be considered as the union of several gradual extensions. As represented in the next figure, we define a region $A_M=\Omega_{0}$. The regions $A_k, k=M-1,\cdots,1$ will be defined as the previous region $A_{k+1}$ to which we add one extra grid line to the left,
right and bottom sides of it, such that the final region $A_1=\Omega$ (we thus have $M=L+1$).
<img src='region1.png' width=500>
We now consider the temporal evolution
of the solution of the HABC method. Suppose that $u(x,z,t-1)$ is the solution at a given instant $t-1$ in all the
extended $\Omega$ domain. We update it to instant $t$, using one of the absorbing boundary conditions described in the previous section (A1, A2 or Higdon), producing a preliminary new function $u(x,z,t)$. Now, call $u_{1}(x,z,t)$ the solution at instant $t$ constructed in the extended region, by applying the same absorbing boundary condition at the border of each of the domains $A_k,k=1,..,M$. The HABC solution will be constructed as a convex combination of $u(x,z,t)$ and $u_{1}(x,z,t)$:
- $u(x,z,t) = (1-\omega)u(x,z,t)+\omega u_{1}(x,z,t)$.
The function $u_{1}(x,z,t)$ is defined (and used) only in the extension of the domain. The function $\omega$ is a
weight function growing from zero at the boundary $\partial\Omega_{0}$ to one at $\partial\Omega$. The particular weight function to be used could vary linearly, as when the scheme was first proposed by Liu and Sen. But HABC produces better results with a non-linear weight function to be described ahead.
The wave equation employed here will be the same as in the previous notebooks, with same velocity model, source term and initial conditions.
## 4.3.1 The weight function $\omega$
One can choose a *linear* weight function as
\begin{equation}
\omega_{k} = \displaystyle\frac{M-k}{M};
\end{equation}
or preferably a *non linear*
\begin{equation}
\omega_{k}=\left\{ \begin{array}{ll}
1, & \textrm{if $1\leq k \leq P+1$,} \\ \left(\displaystyle\frac{M-k}{M-P}\right)^{\alpha} , & \textrm{if $P+2 \leq k \leq M-1.$} \\ 0 , & \textrm{if $k=M$.}\end{array}\right.
\label{eq:elo8}
\end{equation}
In general we take $P=2$ and we choose $\alpha$ as follows:
- $\alpha = 1.5 + 0.07(npt-P)$, in the case of A1 and A2;
- $\alpha = 1.0 + 0.15(npt-P)$, in the case of Higdon.
The value *npt* designates the number of discrete points that define the length of the blue band in the direction $x$ and/or $z$.
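As a concrete illustration (a small sketch of the non-linear weights above, taking $M=npt=20$, $P=2$ and the Higdon choice of $\alpha$ purely as example values):
```
import numpy as np

M, P = 20, 2                                 # illustrative values only
alpha = 1.0 + 0.15*(M - P)                   # Higdon choice of alpha with npt = M
k = np.arange(1, M + 1)
w = np.where(k <= P + 1, 1.0,
             np.where(k < M, ((M - k)/(M - P))**alpha, 0.0))
print(np.round(w, 3))                        # decreases smoothly from 1 to 0
```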
# 4.4 - Finite Difference Operators and Discretization of Spatial and Temporal Domains
We employ the same methods as in the previous notebooks.
# 4.5 - Standard Problem
Recalling the Standard Problem definitions discussed in the notebook <a href="01_introduction.ipynb">Introduction to Acoustic Problem</a> we have that:
- $x_{I}$ = 0.0 Km;
- $x_{F}$ = 1.0 Km = 1000 m;
- $z_{I}$ = 0.0 Km;
- $z_{F}$ = 1.0 Km = 1000 m;
The spatial discretization parameters are given by:
- $\Delta x$ = 0.01 km = 10m;
- $\Delta z$ = 0.01 km = 10m;
Let's consider the time domain $I$ with the following limits:
- $t_{I}$ = 0 s = 0 ms;
- $t_{F}$ = 1 s = 1000 ms;
The temporal discretization parameters are given by:
- $\Delta t$ $\approx$ 0.0016 s = 1.6 ms;
- $NT$ = 626.
The source term, velocity model and positioning of receivers will be as in the previous notebooks.
# 4.6 - Numerical Simulations
For the numerical simulations of this notebook we use several of the notebook codes presented in <a href="02_damping.ipynb">Damping</a> and <a href="03_pml.ipynb">PML</a>. The new features will be described in more detail.
So, we import the following Python and Devito packages:
```
# NBVAL_IGNORE_OUTPUT
import numpy as np
import matplotlib.pyplot as plot
import math as mt
import matplotlib.ticker as mticker
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib import cm
```
From Devito's library of examples we import the following structures:
```
# NBVAL_IGNORE_OUTPUT
%matplotlib inline
from examples.seismic import TimeAxis
from examples.seismic import RickerSource
from examples.seismic import Receiver
from examples.seismic import plot_velocity
from devito import SubDomain, Grid, NODE, TimeFunction, Function, Eq, solve, Operator
```
The mesh parameters that we choose define the domain $\Omega_{0}$ plus the absorption region. For this, we use the following data:
```
nptx = 101
nptz = 101
x0 = 0.
x1 = 1000.
compx = x1-x0
z0 = 0.
z1 = 1000.
compz = z1-z0;
hxv = (x1-x0)/(nptx-1)
hzv = (z1-z0)/(nptz-1)
```
As we saw previously, HABC has three approach possibilities (A1, A2 and Higdon) and two types of weights (linear and non-linear). So, we insert two control variables. The variable called *habctype* chooses the type of HABC approach and is such that:
- *habctype=1* is equivalent to choosing A1;
- *habctype=2* is equivalent to choosing A2;
- *habctype=3* is equivalent to choosing Higdon;
Regarding the weights, we will introduce the variable *habcw* that chooses the type of weight and is such that:
- *habcw=1* is equivalent to linear weight;
- *habcw=2* is equivalent to non-linear weights;
In this way, we make the following choices:
```
habctype = 3
habcw = 2
```
The number of points of the absorption layer in the directions $x$ and $z$ are given, respectively, by:
```
npmlx = 20
npmlz = 20
```
The lengths $L_{x}$ and $L_{z}$ are given, respectively, by:
```
lx = npmlx*hxv
lz = npmlz*hzv
```
For the construction of the *grid* we have:
```
nptx = nptx + 2*npmlx
nptz = nptz + 1*npmlz
x0 = x0 - hxv*npmlx
x1 = x1 + hxv*npmlx
compx = x1-x0
z0 = z0
z1 = z1 + hzv*npmlz
compz = z1-z0
origin = (x0,z0)
extent = (compx,compz)
shape = (nptx,nptz)
spacing = (hxv,hzv)
```
As in the case of the acoustic equation with Damping and in the acoustic equation with PML, we can define specific regions in our domain, since the solution $u_{1}(x,z,t)$ is only calculated in the blue region. We will soon follow a similar scheme for creating *subdomains* as was done on notebooks <a href="02_damping.ipynb">Damping</a> and <a href="03_pml.ipynb">PML</a>.
First, we define a region corresponding to the entire domain, naming this region *d0*. In the language of *subdomains*, *d0* is written as:
```
class d0domain(SubDomain):
name = 'd0'
def define(self, dimensions):
x, z = dimensions
return {x: x, z: z}
d0_domain = d0domain()
```
The blue region will be built with 3 divisions:
- *d1* represents the left range in the direction *x*, where the pairs $(x,z)$ satisfy: $x\in\{0,npmlx\}$ and $z\in\{0,nptz\}$;
- *d2* represents the right range in the direction *x*, where the pairs $(x,z)$ satisfy: $x\in\{nptx-npmlx,nptx\}$ and $z\in\{0,nptz\}$;
- *d3* represents the bottom range in the direction *z*, where the pairs $(x,z)$ satisfy: $x\in\{npmlx,nptx-npmlx\}$ and $z\in\{nptz-npmlz,nptz\}$;
Thus, the regions *d1*, *d2* and *d3* are described as follows in the language of *subdomains*:
```
class d1domain(SubDomain):
name = 'd1'
def define(self, dimensions):
x, z = dimensions
return {x: ('left',npmlx), z: z}
d1_domain = d1domain()
class d2domain(SubDomain):
name = 'd2'
def define(self, dimensions):
x, z = dimensions
return {x: ('right',npmlx), z: z}
d2_domain = d2domain()
class d3domain(SubDomain):
name = 'd3'
def define(self, dimensions):
x, z = dimensions
if((habctype==3)&(habcw==1)):
return {x: x, z: ('right',npmlz)}
else:
return {x: ('middle', npmlx, npmlx), z: ('right',npmlz)}
d3_domain = d3domain()
```
The figure below represents the division of domains that we did previously:
<img src='domain3.png' width=500>
After defining the spatial parameters and constructing the *subdomains*, we generate the *spatial grid* and set the velocity field:
```
grid = Grid(origin=origin, extent=extent, shape=shape, subdomains=(d0_domain,d1_domain,d2_domain,d3_domain), dtype=np.float64)
v0 = np.zeros((nptx,nptz))
X0 = np.linspace(x0,x1,nptx)
Z0 = np.linspace(z0,z1,nptz)
x10 = x0+lx
x11 = x1-lx
z10 = z0
z11 = z1 - lz
xm = 0.5*(x10+x11)
zm = 0.5*(z10+z11)
pxm = 0
pzm = 0
for i in range(0,nptx):
if(X0[i]==xm): pxm = i
for j in range(0,nptz):
if(Z0[j]==zm): pzm = j
p0 = 0
p1 = pzm
p2 = nptz
v0[0:nptx,p0:p1] = 1.5
v0[0:nptx,p1:p2] = 2.5
```
Above we introduced the local variables *x10,x11,z10,z11,xm,zm,pxm* and *pzm* that help us to create a specific velocity field, where we consider the whole domain (including the absorption region). Below we include a routine to plot the velocity field.
```
def graph2dvel(vel):
plot.figure()
plot.figure(figsize=(16,8))
fscale = 1/10**(3)
scale = np.amax(vel[npmlx:-npmlx,0:-npmlz])
extent = [fscale*(x0+lx),fscale*(x1-lx), fscale*(z1-lz), fscale*(z0)]
fig = plot.imshow(np.transpose(vel[npmlx:-npmlx,0:-npmlz]), vmin=0.,vmax=scale, cmap=cm.seismic, extent=extent)
plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.title('Velocity Profile')
plot.grid()
ax = plot.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plot.colorbar(fig, cax=cax, format='%.2e')
cbar.set_label('Velocity [km/s]')
plot.show()
```
Below we include the plot of velocity field.
```
# NBVAL_IGNORE_OUTPUT
graph2dvel(v0)
```
Time parameters are defined and constructed by the following sequence of commands:
```
t0 = 0.
tn = 1000.
CFL = 0.4
vmax = np.amax(v0)
dtmax = np.float64((min(hxv,hzv)*CFL)/(vmax))
ntmax = int((tn-t0)/dtmax)+1
dt0 = np.float64((tn-t0)/ntmax)
```
With the temporal parameters, we generate the time properties with *TimeAxis* as follows:
```
time_range = TimeAxis(start=t0,stop=tn,num=ntmax+1)
nt = time_range.num - 1
```
The symbolic values associated with the spatial and temporal grids that are used in the composition of the equations are given by:
```
(hx,hz) = grid.spacing_map
(x, z) = grid.dimensions
t = grid.stepping_dim
dt = grid.stepping_dim.spacing
```
We set the Ricker source:
```
f0 = 0.01
nsource = 1
xposf = 0.5*(compx-2*npmlx*hxv)
zposf = hzv
src = RickerSource(name='src',grid=grid,f0=f0,npoint=nsource,time_range=time_range,staggered=NODE,dtype=np.float64)
src.coordinates.data[:, 0] = xposf
src.coordinates.data[:, 1] = zposf
```
Below we include the plot of Ricker source.
```
# NBVAL_IGNORE_OUTPUT
src.show()
```
We set the receivers:
```
nrec = nptx
nxpos = np.linspace(x0,x1,nrec)
nzpos = hzv
rec = Receiver(name='rec',grid=grid,npoint=nrec,time_range=time_range,staggered=NODE,dtype=np.float64)
rec.coordinates.data[:, 0] = nxpos
rec.coordinates.data[:, 1] = nzpos
```
The displacement field *u* and the velocity *vel* are allocated:
```
u = TimeFunction(name="u",grid=grid,time_order=2,space_order=2,staggered=NODE,dtype=np.float64)
vel = Function(name="vel",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
vel.data[:,:] = v0[:,:]
```
We include the source term as *src_term* using the following command:
```
src_term = src.inject(field=u.forward,expr=src*dt**2*vel**2)
```
The Receivers are again called *rec_term*:
```
rec_term = rec.interpolate(expr=u)
```
The next step is to generate the $\omega$ weights, which are selected using the *habcw* variable. Our construction approach will be in two steps: in a first step we build local vectors *weightsx* and *weightsz* that represent the weights in the directions $x$ and $z$, respectively. In a second step, with the *weightsx* and *weightsz* vectors, we distribute them in two global arrays called *Mweightsx* and *Mweightsz* that represent the distribution of these weights along the *grid* in the directions $x$ and $z$ respectively. The *generateweights* function below performs the operations listed previously:
```
def generateweights():
weightsx = np.zeros(npmlx)
weightsz = np.zeros(npmlz)
Mweightsx = np.zeros((nptx,nptz))
Mweightsz = np.zeros((nptx,nptz))
if(habcw==1):
for i in range(0,npmlx):
weightsx[i] = (npmlx-i)/(npmlx)
for i in range(0,npmlz):
weightsz[i] = (npmlz-i)/(npmlz)
if(habcw==2):
mx = 2
mz = 2
if(habctype==3):
alphax = 1.0 + 0.15*(npmlx-mx)
alphaz = 1.0 + 0.15*(npmlz-mz)
else:
alphax = 1.5 + 0.07*(npmlx-mx)
alphaz = 1.5 + 0.07*(npmlz-mz)
for i in range(0,npmlx):
if(0<=i<=(mx)):
weightsx[i] = 1
elif((mx+1)<=i<=npmlx-1):
weightsx[i] = ((npmlx-i)/(npmlx-mx))**(alphax)
else:
weightsx[i] = 0
for i in range(0,npmlz):
if(0<=i<=(mz)):
weightsz[i] = 1
elif((mz+1)<=i<=npmlz-1):
weightsz[i] = ((npmlz-i)/(npmlz-mz))**(alphaz)
else:
weightsz[i] = 0
for k in range(0,npmlx):
ai = k
af = nptx - k - 1
bi = 0
bf = nptz - k
Mweightsx[ai,bi:bf] = weightsx[k]
Mweightsx[af,bi:bf] = weightsx[k]
for k in range(0,npmlz):
ai = k
af = nptx - k
bf = nptz - k - 1
Mweightsz[ai:af,bf] = weightsz[k]
return Mweightsx,Mweightsz
```
Once the *generateweights* function has been created, we execute it with the following command:
```
Mweightsx,Mweightsz = generateweights();
```
Below we include a routine to plot the weight fields.
```
def graph2dweight(D):
plot.figure()
plot.figure(figsize=(16,8))
    fscale = 10**(-3)
scale = np.amax(D)
extent = [fscale*x0,fscale*x1, fscale*z1, fscale*z0]
fig = plot.imshow(np.transpose(D), vmin=0.,vmax=scale, cmap=cm.seismic, extent=extent)
plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.title('Weight Function')
plot.grid()
ax = plot.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plot.colorbar(fig, cax=cax, format='%.2e')
cbar.set_label('Weights')
plot.show()
```
Below we include the plot of weights field in $x$ direction.
```
# NBVAL_IGNORE_OUTPUT
graph2dweight(Mweightsx)
```
Below we include the plot of weights field in $z$ direction.
```
# NBVAL_IGNORE_OUTPUT
graph2dweight(Mweightsz)
```
Next we create the fields for the weight arrays *weightsx* and *weightsz*:
```
weightsx = Function(name="weightsx",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
weightsx.data[:,:] = Mweightsx[:,:]
weightsz = Function(name="weightsz",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
weightsz.data[:,:] = Mweightsz[:,:]
```
For the discretization of the A2 and Higdon's boundary conditions (to calculate $u_{1}(x,z,t)$) we need information from three time levels, namely $u(x,z,t-1)$, $u (x,z,t)$ and $u(x,z,t+1)$. So it is convenient to create the three fields:
```
u1 = Function(name="u1" ,grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
u2 = Function(name="u2" ,grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
u3 = Function(name="u3" ,grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
```
We will assign to each of them the three time solutions described previously, that is,
- u1(x,z) = u(x,z,t-1);
- u2(x,z) = u(x,z,t);
- u3(x,z) = u(x,z,t+1);
These three assignments can be represented by the *stencil01* given by:
```
stencil01 = [Eq(u1,u.backward),Eq(u2,u),Eq(u3,u.forward)]
```
An update of the term *u3(x,z)* will be necessary after updating *u(x,z,t+1)* in the direction $x$, so that we can continue to apply the HABC method. This update is given by *stencil02* defined as:
```
stencil02 = [Eq(u3,u.forward)]
```
For the acoustic equation with HABC, without the source term, we need in $\Omega$
- eq1 = u.dt2 - vel * vel * u.laplace;
So the *pde* that represents this equation is given by:
```
pde0 = Eq(u.dt2 - u.laplace*vel**2)
```
And the *stencil* for *pde0* is given to:
```
stencil0 = Eq(u.forward, solve(pde0,u.forward))
```
For the blue region we will divide it into $npmlx$ layers in the $x$ direction and $npmlz$ layers in the $z$ direction. In this case, the representation is a little more complex than shown in the figures that exemplify the regions $A_{k}$ because there are intersections between the layers.
**Observation:** Note that the representation of the $A_{k}$ layers that we present in our text reflects the case where $npmlx=npmlz$. However, our code includes the case illustrated in the figure, as well as situations in which $npmlx\neq npmlz$. The discretizations of the boundary conditions A1, A2 and Higdon follow the bibliographic references at the end. They will not be detailed here, but can be seen in the codes below.
In the sequence of codes below we build the *pdes* that represent the *eqs* of the regions $B_{1}$, $B_{2}$ and $B_{3}$ and/or in the corners (red points in the case of *A2*) as represented in the following figure:
<img src='region2.png' width=500>
In the sequence, we present the *stencils* for each of these *pdes*.
So, for the A1 case we have the following *pdes* and *stencils*:
```
if(habctype==1):
# Region B_{1}
aux1 = ((-vel[x,z]*dt+hx)*u2[x,z] + (vel[x,z]*dt+hx)*u2[x+1,z] + (vel[x,z]*dt-hx)*u3[x+1,z])/(vel[x,z]*dt+hx)
pde1 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux1
stencil1 = Eq(u.forward,pde1,subdomain = grid.subdomains['d1'])
# Region B_{3}
aux2 = ((-vel[x,z]*dt+hx)*u2[x,z] + (vel[x,z]*dt+hx)*u2[x-1,z] + (vel[x,z]*dt-hx)*u3[x-1,z])/(vel[x,z]*dt+hx)
pde2 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux2
stencil2 = Eq(u.forward,pde2,subdomain = grid.subdomains['d2'])
# Region B_{2}
aux3 = ((-vel[x,z]*dt+hz)*u2[x,z] + (vel[x,z]*dt+hz)*u2[x,z-1] + (vel[x,z]*dt-hz)*u3[x,z-1])/(vel[x,z]*dt+hz)
pde3 = (1-weightsz[x,z])*u3[x,z] + weightsz[x,z]*aux3
stencil3 = Eq(u.forward,pde3,subdomain = grid.subdomains['d3'])
```
For the A2 case we have the following *pdes* and *stencils*:
```
if(habctype==2):
# Region B_{1}
cte11 = (1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z]
cte21 = -(1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z] - (1/(2*hz**2))*vel[x,z]*vel[x,z]
cte31 = -(1/(2*dt**2)) - (1/(2*dt*hx))*vel[x,z]
cte41 = (1/(dt**2))
cte51 = (1/(4*hz**2))*vel[x,z]**2
aux1 = (cte21*(u3[x+1,z] + u1[x,z]) + cte31*u1[x+1,z] + cte41*(u2[x,z]+u2[x+1,z]) + cte51*(u3[x+1,z+1] + u3[x+1,z-1] + u1[x,z+1] + u1[x,z-1]))/cte11
pde1 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux1
stencil1 = Eq(u.forward,pde1,subdomain = grid.subdomains['d1'])
# Region B_{3}
cte12 = (1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z]
cte22 = -(1/(2*dt**2)) + (1/(2*dt*hx))*vel[x,z] - (1/(2*hz**2))*vel[x,z]**2
cte32 = -(1/(2*dt**2)) - (1/(2*dt*hx))*vel[x,z]
cte42 = (1/(dt**2))
cte52 = (1/(4*hz**2))*vel[x,z]*vel[x,z]
aux2 = (cte22*(u3[x-1,z] + u1[x,z]) + cte32*u1[x-1,z] + cte42*(u2[x,z]+u2[x-1,z]) + cte52*(u3[x-1,z+1] + u3[x-1,z-1] + u1[x,z+1] + u1[x,z-1]))/cte12
pde2 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux2
stencil2 = Eq(u.forward,pde2,subdomain = grid.subdomains['d2'])
# Region B_{2}
cte13 = (1/(2*dt**2)) + (1/(2*dt*hz))*vel[x,z]
cte23 = -(1/(2*dt**2)) + (1/(2*dt*hz))*vel[x,z] - (1/(2*hx**2))*vel[x,z]**2
cte33 = -(1/(2*dt**2)) - (1/(2*dt*hz))*vel[x,z]
cte43 = (1/(dt**2))
cte53 = (1/(4*hx**2))*vel[x,z]*vel[x,z]
aux3 = (cte23*(u3[x,z-1] + u1[x,z]) + cte33*u1[x,z-1] + cte43*(u2[x,z]+u2[x,z-1]) + cte53*(u3[x+1,z-1] + u3[x-1,z-1] + u1[x+1,z] + u1[x-1,z]))/cte13
pde3 = (1-weightsz[x,z])*u3[x,z] + weightsz[x,z]*aux3
stencil3 = Eq(u.forward,pde3,subdomain = grid.subdomains['d3'])
# Red point rigth side
stencil4 = [Eq(u[t+1,nptx-1-k,nptz-1-k],(1-weightsz[nptx-1-k,nptz-1-k])*u3[nptx-1-k,nptz-1-k] +
weightsz[nptx-1-k,nptz-1-k]*(((-(1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u3[nptx-1-k,nptz-2-k]
+ ((1/(4*hx)) - (1/(4*hz)) - (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u3[nptx-2-k,nptz-1-k]
+ ((1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u3[nptx-2-k,nptz-2-k]
+ (-(1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-1-k,nptz-1-k]
+ (-(1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-1-k,nptz-2-k]
+ ((1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-2-k,nptz-1-k]
+ ((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))*u2[nptx-2-k,nptz-2-k])
/ (((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[nptx-1-k,nptz-1-k]*dt))))) for k in range(0,npmlz)]
# Red point left side
stencil5 = [Eq(u[t+1,k,nptz-1-k],(1-weightsx[k,nptz-1-k] )*u3[k,nptz-1-k]
+ weightsx[k,nptz-1-k]*(( (-(1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u3[k,nptz-2-k]
+ ((1/(4*hx)) - (1/(4*hz)) - (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u3[k+1,nptz-1-k]
+ ((1/(4*hx)) + (1/(4*hz)) - (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u3[k+1,nptz-2-k]
+ (-(1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k,nptz-1-k]
+ (-(1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k,nptz-2-k]
+ ((1/(4*hx)) - (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k+1,nptz-1-k]
+ ((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))*u2[k+1,nptz-2-k])
/ (((1/(4*hx)) + (1/(4*hz)) + (np.sqrt(2))/(4*vel[k,nptz-1-k]*dt))))) for k in range(0,npmlx)]
```
For the Higdon case we have the following *pdes* and *stencils*:
```
if(habctype==3):
alpha1 = 0.0
alpha2 = np.pi/4
a1 = 0.5
b1 = 0.5
a2 = 0.5
b2 = 0.5
# Region B_{1}
gama111 = np.cos(alpha1)*(1-a1)*(1/dt)
gama121 = np.cos(alpha1)*(a1)*(1/dt)
gama131 = np.cos(alpha1)*(1-b1)*(1/hx)*vel[x,z]
gama141 = np.cos(alpha1)*(b1)*(1/hx)*vel[x,z]
gama211 = np.cos(alpha2)*(1-a2)*(1/dt)
gama221 = np.cos(alpha2)*(a2)*(1/dt)
gama231 = np.cos(alpha2)*(1-b2)*(1/hx)*vel[x,z]
gama241 = np.cos(alpha2)*(b2)*(1/hx)*vel[x,z]
c111 = gama111 + gama131
c121 = -gama111 + gama141
c131 = gama121 - gama131
c141 = -gama121 - gama141
c211 = gama211 + gama231
c221 = -gama211 + gama241
c231 = gama221 - gama231
c241 = -gama221 - gama241
aux1 = ( u2[x,z]*(-c111*c221-c121*c211) + u3[x+1,z]*(-c111*c231-c131*c211) + u2[x+1,z]*(-c111*c241-c121*c231-c141*c211-c131*c221)
+ u1[x,z]*(-c121*c221) + u1[x+1,z]*(-c121*c241-c141*c221) + u3[x+2,z]*(-c131*c231) +u2[x+2,z]*(-c131*c241-c141*c231)
+ u1[x+2,z]*(-c141*c241))/(c111*c211)
pde1 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux1
stencil1 = Eq(u.forward,pde1,subdomain = grid.subdomains['d1'])
# Region B_{3}
gama112 = np.cos(alpha1)*(1-a1)*(1/dt)
gama122 = np.cos(alpha1)*(a1)*(1/dt)
gama132 = np.cos(alpha1)*(1-b1)*(1/hx)*vel[x,z]
gama142 = np.cos(alpha1)*(b1)*(1/hx)*vel[x,z]
gama212 = np.cos(alpha2)*(1-a2)*(1/dt)
gama222 = np.cos(alpha2)*(a2)*(1/dt)
gama232 = np.cos(alpha2)*(1-b2)*(1/hx)*vel[x,z]
gama242 = np.cos(alpha2)*(b2)*(1/hx)*vel[x,z]
c112 = gama112 + gama132
c122 = -gama112 + gama142
c132 = gama122 - gama132
c142 = -gama122 - gama142
c212 = gama212 + gama232
c222 = -gama212 + gama242
c232 = gama222 - gama232
c242 = -gama222 - gama242
aux2 = ( u2[x,z]*(-c112*c222-c122*c212) + u3[x-1,z]*(-c112*c232-c132*c212) + u2[x-1,z]*(-c112*c242-c122*c232-c142*c212-c132*c222)
+ u1[x,z]*(-c122*c222) + u1[x-1,z]*(-c122*c242-c142*c222) + u3[x-2,z]*(-c132*c232) +u2[x-2,z]*(-c132*c242-c142*c232)
+ u1[x-2,z]*(-c142*c242))/(c112*c212)
pde2 = (1-weightsx[x,z])*u3[x,z] + weightsx[x,z]*aux2
stencil2 = Eq(u.forward,pde2,subdomain = grid.subdomains['d2'])
# Region B_{2}
gama113 = np.cos(alpha1)*(1-a1)*(1/dt)
gama123 = np.cos(alpha1)*(a1)*(1/dt)
gama133 = np.cos(alpha1)*(1-b1)*(1/hz)*vel[x,z]
gama143 = np.cos(alpha1)*(b1)*(1/hz)*vel[x,z]
gama213 = np.cos(alpha2)*(1-a2)*(1/dt)
gama223 = np.cos(alpha2)*(a2)*(1/dt)
gama233 = np.cos(alpha2)*(1-b2)*(1/hz)*vel[x,z]
gama243 = np.cos(alpha2)*(b2)*(1/hz)*vel[x,z]
c113 = gama113 + gama133
c123 = -gama113 + gama143
c133 = gama123 - gama133
c143 = -gama123 - gama143
c213 = gama213 + gama233
c223 = -gama213 + gama243
c233 = gama223 - gama233
c243 = -gama223 - gama243
aux3 = ( u2[x,z]*(-c113*c223-c123*c213) + u3[x,z-1]*(-c113*c233-c133*c213) + u2[x,z-1]*(-c113*c243-c123*c233-c143*c213-c133*c223)
+ u1[x,z]*(-c123*c223) + u1[x,z-1]*(-c123*c243-c143*c223) + u3[x,z-2]*(-c133*c233) +u2[x,z-2]*(-c133*c243-c143*c233)
+ u1[x,z-2]*(-c143*c243))/(c113*c213)
pde3 = (1-weightsz[x,z])*u3[x,z] + weightsz[x,z]*aux3
stencil3 = Eq(u.forward,pde3,subdomain = grid.subdomains['d3'])
```
The surface boundary conditions of the problem are the same as in the notebook <a href="01_introduction.ipynb">Introduction to Acoustic Problem</a>. They are placed in the term *bc* and have the following form:
```
bc = [Eq(u[t+1,x,0],u[t+1,x,1])]
```
We will then define the operator (*op*) that will join the acoustic equation, source term, boundary conditions and receivers.
- 1. The acoustic wave equation in the *d0* region: *[stencil0];*
- 2. Source term: *src_term;*
- 3. Updating solutions over time: *[stencil01,stencil02];*
- 4. The acoustic wave equation in the *d1*, *d2* e *d3* regions: *[stencil1,stencil2,stencil3];*
- 5. The equation for red points for A2 method: *[stencil5,stencil4];*
- 6. Boundary Conditions: *bc;*
- 7. Receivers: *rec_term;*
We then define two types of *op*:
- The first *op* is for the cases A1 and Higdon;
- The second *op* is for the case A2;
The *ops* are constructed by the following commands:
```
# NBVAL_IGNORE_OUTPUT
if(habctype!=2):
op = Operator([stencil0] + src_term + [stencil01,stencil3,stencil02,stencil2,stencil1] + bc + rec_term,subs=grid.spacing_map)
else:
op = Operator([stencil0] + src_term + [stencil01,stencil3,stencil02,stencil2,stencil1,stencil02,stencil4,stencil5] + bc + rec_term,subs=grid.spacing_map)
```
Initially, all fields are set to zero:
```
u.data[:] = 0.
u1.data[:] = 0.
u2.data[:] = 0.
u3.data[:] = 0.
```
We assign to *op* the number of time steps it must execute and the size of the time step in the local variables *time* and *dt*, respectively.
```
# NBVAL_IGNORE_OUTPUT
op(time=nt,dt=dt0)
```
We view the result of the displacement field at the end time using the *graph2d* routine given by:
```
def graph2d(U,i):
plot.figure()
plot.figure(figsize=(16,8))
fscale = 1/10**(3)
x0pml = x0 + npmlx*hxv
x1pml = x1 - npmlx*hxv
z0pml = z0
z1pml = z1 - npmlz*hzv
scale = np.amax(U[npmlx:-npmlx,0:-npmlz])/10.
extent = [fscale*x0pml,fscale*x1pml,fscale*z1pml,fscale*z0pml]
fig = plot.imshow(np.transpose(U[npmlx:-npmlx,0:-npmlz]),vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)
plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.axis('equal')
if(i==1): plot.title('Map - Acoustic Problem with Devito - HABC A1')
if(i==2): plot.title('Map - Acoustic Problem with Devito - HABC A2')
if(i==3): plot.title('Map - Acoustic Problem with Devito - HABC Higdon')
plot.grid()
ax = plot.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plot.colorbar(fig, cax=cax, format='%.2e')
cbar.set_label('Displacement [km]')
plot.draw()
plot.show()
# NBVAL_IGNORE_OUTPUT
graph2d(u.data[0,:,:],habctype)
```
We plot the Receivers shot records using the *graph2drec* routine.
```
def graph2drec(rec,i):
plot.figure()
plot.figure(figsize=(16,8))
fscaled = 1/10**(3)
fscalet = 1/10**(3)
x0pml = x0 + npmlx*hxv
x1pml = x1 - npmlx*hxv
scale = np.amax(rec[:,npmlx:-npmlx])/10.
extent = [fscaled*x0pml,fscaled*x1pml, fscalet*tn, fscalet*t0]
fig = plot.imshow(rec[:,npmlx:-npmlx], vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)
plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f s'))
plot.axis('equal')
if(i==1): plot.title('Receivers Signal Profile - Devito with HABC A1')
if(i==2): plot.title('Receivers Signal Profile - Devito with HABC A2')
if(i==3): plot.title('Receivers Signal Profile - Devito with HABC Higdon')
ax = plot.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plot.colorbar(fig, cax=cax, format='%.2e')
plot.show()
# NBVAL_IGNORE_OUTPUT
graph2drec(rec.data,habctype)
assert np.isclose(np.linalg.norm(rec.data), 990, rtol=1)
```
# 4.7 - Conclusions
We have presented the HABC method for the acoustic wave equation, which can be used with any of the
absorbing boundary conditions A1, A2 or Higdon. The notebook also includes the possibility of using these boundary conditions alone, without combining them with the HABC. The user can test several combinations of parameters and observe their effect on the absorption of spurious reflections at the computational boundaries.
The relevant references for the boundary conditions are furnished next.
## 4.8 - References
- Clayton, R., & Engquist, B. (1977). "Absorbing boundary conditions for acoustic and elastic wave equations", Bulletin of the seismological society of America, 67(6), 1529-1540. <a href="https://pubs.geoscienceworld.org/ssa/bssa/article/67/6/1529/117727?casa_token=4TvjJGJDLQwAAAAA:Wm-3fVLn91tdsdHv9H6Ek7tTQf0jwXVSF10zPQL61lXtYZhaifz7jsHxqTvrHPufARzZC2-lDw">Reference Link.</a>
- Engquist, B., & Majda, A. (1979). "Radiation boundary conditions for acoustic and elastic wave calculations," Communications on pure and applied mathematics, 32(3), 313-357. DOI: 10.1137/0727049. <a href="https://epubs.siam.org/doi/abs/10.1137/0727049">Reference Link.</a>
- Higdon, R. L. (1987). "Absorbing boundary conditions for difference approximations to the multidimensional wave equation," Mathematics of computation, 47(176), 437-459. DOI: 10.1090/S0025-5718-1986-0856696-4. <a href="https://www.ams.org/journals/mcom/1986-47-176/S0025-5718-1986-0856696-4/">Reference Link.</a>
- Higdon, Robert L. "Numerical absorbing boundary conditions for the wave equation," Mathematics of computation, v. 49, n. 179, p. 65-90, 1987. DOI: 10.1090/S0025-5718-1987-0890254-1. <a href="https://www.ams.org/journals/mcom/1987-49-179/S0025-5718-1987-0890254-1/">Reference Link.</a>
- Liu, Y., & Sen, M. K. (2018). "An improved hybrid absorbing boundary condition for wave equation modeling," Journal of Geophysics and Engineering, 15(6), 2602-2613. DOI: 10.1088/1742-2140/aadd31. <a href="https://academic.oup.com/jge/article/15/6/2602/5209803">Reference Link.</a>
```
#!pip3 install sklearn
from sklearn.datasets import make_classification
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
import numpy as np
```
## Create Dataset
Making a ton of adjustments so that the dataset is as close to actual transaction data as possible.
- `price` is the value of the laptop
- `num_past_orders` is the number of orders this person has made in the past with grandma fixes
```
X, y = make_classification(n_samples=10000,
n_features=2,
n_redundant=0,
random_state=42,
weights=[0.9])
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
y = scaler.fit_transform(y.reshape(-1,1))
Xs = pd.DataFrame(X, columns = ['price', 'num_past_orders'])
ys = pd.DataFrame(y, columns=['label'])
Xs['price'] = Xs['price'].apply(lambda x: 50 + int(x*2000))
Xs['num_past_orders'] = Xs['num_past_orders'].apply(lambda x: int(x*50))
Xs.describe()
X_train_raw, X_test, y_train_raw, y_test = train_test_split(Xs, ys, test_size=0.10, shuffle=False)
X_train, X_val, y_train, y_val = train_test_split(X_train_raw, y_train_raw, test_size=0.10, shuffle=False)
y_train['label'].value_counts()
y_test['label'].value_counts()
```
## Create (and calibrate) model
Calibration is done to ensure the output of the model can actually be interpreted as a probability. Whether it is required depends on the model you use. If you sample a subset of the data, or weight certain samples over others, calibration becomes more important.
We will take a look into this more in another video
```
clf = LogisticRegression(class_weight='balanced')
calibrated_clf = CalibratedClassifierCV(base_estimator=clf, cv=3, method='isotonic')
calibrated_clf.fit(X_train, y_train.values.ravel())
y_pred = calibrated_clf.predict_proba(X_test)[:, 1]
roc_auc_score(y_test, y_pred)
y_pred_df = pd.DataFrame(y_pred, columns=['prediction'])
pred_df = pd.concat([y_pred_df, y_test.reset_index()],axis=1)[['prediction', 'label']]
y_pred_df.describe()
```
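As a quick visual check (an aside, not from the original notebook), a reliability curve shows how closely the predicted probabilities track the observed fraction of positives:
```
from sklearn.calibration import calibration_curve
import matplotlib.pyplot as plt

prob_true, prob_pred = calibration_curve(y_test.values.ravel(), y_pred, n_bins=10)
plt.plot(prob_pred, prob_true, marker='o', label='model')
plt.plot([0, 1], [0, 1], linestyle='--', label='perfectly calibrated')
plt.xlabel('mean predicted probability')
plt.ylabel('fraction of positives')
plt.legend();
```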
## Cost Calculations
```
df = X_test.merge(y_test,left_index=True, right_index=True)
```
### Case 1: Insure nothing
We pay full price for the laptops we lose
```
df['price'][df['label']==1].sum()
```
### Case 2: Insure Everything
We pay \\$30 for every laptop regardless of whether we lose them or not
```
df.shape[0] * 30
```
### Case 3: Insure Based on Model
```
predictions = df.reset_index().drop('index', axis=1).merge(pred_df[['prediction']], left_index=True, right_index=True)
predictions.sample(2)
predictions['E_x'] = predictions['price'] * predictions['prediction']
predictions['insure'] = predictions['E_x'] > 30
predictions.sample(2)
predictions['insure'].value_counts()
def cal_loss(x):
if x['insure']:
return 30
if not x['insure'] and x['label']==1:
return x['price']
return 0
predictions['loss'] = predictions.apply(cal_loss, axis=1)
predictions['loss'].sum()
```
# Model zoo
```
import torch
import numpy as np
import tensorflow as tf
```
## Generate toy data
```
def generate_data(n=16, samples_per_class=1000):
"""
Generate some classification data
Args:
n (int): square root of the number of features.
samples_per_class (int): number of samples per class.
Returns:
a tuple containing data and labels.
"""
# data for a class
a_class_samples = np.random.rand(samples_per_class, n, n).astype(np.float32)
a_class_labels = np.zeros(samples_per_class, dtype=int)
# data for another class
another_class_samples = np.array([
np.eye(n)*np.random.rand(1).item()
for _ in range(samples_per_class)
]).astype(np.float32)
another_class_labels = np.ones(samples_per_class, dtype=int)
# aggregate data
data = np.vstack([a_class_samples, another_class_samples])
labels = np.hstack([a_class_labels, another_class_labels])
# prepare a shuffled index
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
return data[indices], labels[indices]
# get data
n = 16
features = n*n
number_of_classes = 2
X_train, y_train = generate_data(n=n)
X_test, y_test = generate_data(n=n)
```
## MLP
```
# parameters
units = [32, 8]
```
### PyTorch
```
class MLP(torch.nn.Module):
"""A MultiLayer Perceptron class."""
def __init__(
self, features,
units=[8], number_of_classes=2,
activation_module=torch.nn.ReLU
):
"""
Inititalize the MLP.
Args:
features (int): number of features.
units (list): list of hidden layer units.
number_of_classes (int): number of classes to predict.
activation_module (torch.nn.Module): module representing
the activation function to apply in the hidden layers.
"""
super(MLP, self).__init__()
self.units = [features] + units
self.activation_module = activation_module
self.hidden_layers = torch.nn.Sequential(*[
torch.nn.Sequential(
torch.nn.Linear(input_size, output_size),
self.activation_module()
)
for input_size, output_size in zip(
self.units, self.units[1:]
)
])
        self.last_layer = torch.nn.Sequential(*[
torch.nn.Linear(self.units[-1], number_of_classes),
torch.nn.Softmax(dim=1)
])
def forward(self, sample):
"""
Apply the forward pass of the model.
Args:
sample (torch.Tensor): a torch.Tensor representing a sample.
Returns:
a torch.Tensor containing softmaxed predictions.
"""
encoded_sample = self.hidden_layers(sample)
return self.last_layer(encoded_sample)
X = torch.from_numpy(X_train.reshape(-1, features))
model = MLP(features=features, units=units, number_of_classes=number_of_classes)
model(X)
```
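The zoo only exercises forward passes; for completeness, a minimal training sketch for this PyTorch MLP might look as follows (an assumption-laden aside: since the model already ends in a softmax, its log is fed to `NLLLoss`):
```
targets = torch.from_numpy(y_train).long()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.NLLLoss()
for epoch in range(5):
    optimizer.zero_grad()
    log_probs = torch.log(model(X) + 1e-8)  # NLLLoss expects log-probabilities
    loss = criterion(log_probs, targets)
    loss.backward()
    optimizer.step()
    print(epoch, loss.item())
```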
### TensorFlow/Keras
```
def mlp(
features,
units=[8], number_of_classes=2,
activation='relu'
):
"""
Build a MLP.
Args:
features (int): number of features.
units (list): list of hidden layer units.
number_of_classes (int): number of classes to predict.
activation (str): string identifying the activation used.
Returns:
a tf.keras.Model.
"""
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units[0], activation=activation, input_shape=(features,)))
for unit in units[1:]:
model.add(tf.keras.layers.Dense(unit, activation=activation))
model.add(tf.keras.layers.Dense(number_of_classes, activation='softmax'))
return model
X = X_train.reshape(-1, features)
model = mlp(features=features, units=units, number_of_classes=number_of_classes)
model.predict(X)
```
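The Keras counterpart would be compiled and fitted in the usual way (again only a sketch, assuming sparse integer labels as generated above):
```
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y_train, epochs=5, batch_size=32, verbose=0)
```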
## AE
```
# parameters
units = [32, 8]
```
### PyTorch
```
class AE(torch.nn.Module):
"""An AutoEncoder class."""
def __init__(
self, features,
units=[8], activation_module=torch.nn.ReLU
):
"""
Inititalize the AE.
Args:
features (int): number of features.
units (list): list of hidden layer units.
activation_module (torch.nn.Module): module representing
the activation function to apply in the hidden layers.
"""
super(AE, self).__init__()
self.units = [features] + units
self.activation_module = activation_module
zipped_units = list(zip(
self.units, self.units[1:]
))
# encoding
self.encoder = torch.nn.Sequential(*[
torch.nn.Sequential(
torch.nn.Linear(input_size, output_size),
self.activation_module()
)
for input_size, output_size in zipped_units
])
# decoding
last_decoder_units, *hidden_decoder_units = zipped_units
self.decoder = torch.nn.Sequential(*[
torch.nn.Sequential(
torch.nn.Linear(input_size, output_size),
self.activation_module()
)
for input_size, output_size in map(
lambda t: t[::-1],
hidden_decoder_units[::-1]
)
])
self.last_layer = torch.nn.Linear(*last_decoder_units[::-1])
def forward(self, sample):
"""
Apply the forward pass of the model.
Args:
sample (torch.Tensor): a torch.Tensor representing a sample.
Returns:
a torch.Tensor containing the reconstructed example.
"""
encoded_sample = self.encoder(sample)
decoded_sample = self.decoder(encoded_sample)
return self.last_layer(decoded_sample)
X = torch.from_numpy(X_train.reshape(-1, features))
model = AE(features=features, units=units)
model(X)
# get encoded representation
model.encoder(X)
```
### TensorFlow/Keras
```
def ae(features, units=[8], activation='relu'):
"""
Build an AE.
Args:
features (int): number of features.
units (list): list of hidden layer units.
activation (str): string identifying the activation used.
Returns:
a tf.keras.Model.
"""
model = tf.keras.Sequential()
# encoding
model.add(tf.keras.layers.Dense(
units[0], activation=activation, input_shape=(features,)
))
for unit in units[1:]:
model.add(tf.keras.layers.Dense(unit, activation=activation))
# decoding
for unit in units[::-1][1:]:
model.add(tf.keras.layers.Dense(unit, activation=activation))
model.add(tf.keras.layers.Dense(features))
return model
X = X_train.reshape(-1, features)
model = ae(features=features, units=units)
model.predict(X)
# get encoded representation
encoder = tf.keras.Model(
inputs=model.input,
outputs=model.layers[len(units) - 1].output
)
encoder.predict(X)
```
## CNN
```
# parameters
filters = [64, 32]
kernel_size = (3, 3)
channels = 1
```
### PyTorch
```
class CNN(torch.nn.Module):
"""A Convolutional Neural Network class."""
def __init__(
self, channels,
filters=[8], kernel_size=(3,3),
number_of_classes=2,
activation_module=torch.nn.ReLU
):
"""
Inititalize the CNN.
Args:
channels (int): number of input channels.
filters (list): list of filters.
kernel_size (tuple): size of the kernel.
number_of_classes (int): number of classes to predict.
activation_module (torch.nn.Module): module representing
the activation function to apply in the hidden layers.
"""
super(CNN, self).__init__()
self.filters = [channels] + filters
self.kernel_size = kernel_size
self.activation_module = activation_module
self.stacked_convolutions = torch.nn.Sequential(*[
torch.nn.Sequential(
torch.nn.Conv2d(input_size, output_size, kernel_size),
self.activation_module(),
torch.nn.MaxPool2d((2,2), stride=2)
)
for input_size, output_size in zip(
self.filters, self.filters[1:]
)
])
self.last_layer = torch.nn.Sequential(*[
torch.nn.Linear(self.filters[-1], number_of_classes),
torch.nn.Softmax(dim=1)
])
def forward(self, sample):
"""
Apply the forward pass of the model.
Args:
sample (torch.Tensor): a torch.Tensor representing a sample.
Returns:
a torch.Tensor containing softmaxed predictions.
"""
encoded_sample = self.stacked_convolutions(sample)
return self.last_layer(encoded_sample.mean((2,3)))
X = torch.from_numpy(np.expand_dims(X_train, 1))
model = CNN(
channels=channels, filters=filters,
kernel_size=kernel_size,
number_of_classes=number_of_classes
)
model(X)
```
### TensorFlow/Keras
```
def cnn(
channels, input_shape,
filters=[8], kernel_size=(3,3),
number_of_classes=2, activation='relu'):
"""
Build a CNN.
Args:
channels (int): number of input channels.
input_shape (tuple): input shape.
filters (list): list of filters.
kernel_size (tuple): size of the kernel.
number_of_classes (int): number of classes to predict.
activation (str): string identifying the activation used.
Returns:
a tf.keras.Model.
"""
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(
filters[0], kernel_size, activation=activation,
input_shape=input_shape
))
for a_filter in filters[1:]:
model.add(tf.keras.layers.Conv2D(
a_filter, kernel_size, activation=activation
))
model.add(tf.keras.layers.GlobalAveragePooling2D())
model.add(tf.keras.layers.Dense(number_of_classes, activation='softmax'))
return model
X = np.expand_dims(X_train, 3)
model = cnn(
channels=channels, input_shape=X.shape[1:],
filters=filters, kernel_size=kernel_size,
number_of_classes=number_of_classes
)
model.predict(X)
```
## RNN
```
# parameters
units = [32, 8]
```
### PyTorch
```
class RNN(torch.nn.Module):
"""A Recurrent Neural Network class."""
def __init__(
self, input_size, units=[8],
number_of_classes=2, rnn_cell=torch.nn.GRU
):
"""
Inititalize the RNN.
Args:
input_size (int): size of the input.
units (list): list of hidden layer units.
number_of_classes (int): number of classes to predict.
rnn_cell (torch.nn.RNNBase): a RNN cell.
"""
super(RNN, self).__init__()
self.units = [input_size] + units
        # a ModuleList registers the recurrent layers as sub-modules (so their parameters are tracked)
        self.rnn_layers = torch.nn.ModuleList([
            rnn_cell(input_size, output_size)
            for input_size, output_size in zip(
                self.units, self.units[1:]
            )
        ])
self.last_layer = torch.nn.Sequential(*[
torch.nn.Linear(self.units[-1], number_of_classes),
torch.nn.Softmax(dim=1)
])
def forward(self, sample):
"""
Apply the forward pass of the model.
Args:
sample (torch.Tensor): a torch.Tensor representing a sample.
Returns:
a torch.Tensor containing softmaxed predictions.
"""
encoded_sample = sample
for rnn_layer in self.rnn_layers[:-1]:
encoded_sample, _ = rnn_layer(encoded_sample)
encoded_sample = self.rnn_layers[-1](encoded_sample)[1].squeeze(0)
return self.last_layer(encoded_sample)
X = torch.from_numpy(X_train.transpose((1,0,2)))
model = RNN(
input_size=n, units=units,
number_of_classes=number_of_classes
)
model(X)
```
### TensorFlow/Keras
```
def rnn(
sequence_length, input_size,
units=[8], number_of_classes=2,
rnn_cell=tf.keras.layers.GRU
):
"""
Build a RNN.
Args:
sequence_length (int): length of the sequence.
input_size (int): size of the input.
units (list): list of hidden layer units.
number_of_classes (int): number of classes to predict.
rnn_cell (tf.keras.layers.RNN): a RNN cell.
Returns:
a tf.keras.Model.
"""
model = tf.keras.Sequential()
is_stacked = len(units) > 1
    model.add(rnn_cell(units=units[0], input_shape=(sequence_length, input_size), return_sequences=is_stacked))
for unit in units[1:-1]:
model.add(rnn_cell(units=unit, return_sequences=True))
if is_stacked:
model.add(rnn_cell(units=units[-1]))
model.add(tf.keras.layers.Dense(number_of_classes, activation='softmax'))
return model
X = X_train
model = rnn(
sequence_length=n, input_size=n, units=units,
number_of_classes=number_of_classes
)
model.predict(X)
```
# 4. Indexing, slicing
Each element of an array can be located by its position in each dimension. Numpy offers multiple ways to access single elements or groups of elements in very efficient ways. We will illustrate these concepts both with small simple matrices and with a regular image.
```
import numpy as np
import matplotlib.pyplot as plt
plt.gray();
import skimage
```
We first load an image included in the scikit-image package:
```
image = skimage.data.chelsea()
plt.imshow(image);
```
We can check the dimensions of the image and see that it is an RGB image with 3 channels:
```
image.shape
```
## 4.1 Accessing single values
We create a small 2D array to use as an example:
```
normal_array = np.random.normal(10, 2, (3,4))
normal_array
```
It is very easy to access an array's values. One can just pass an *index* for each dimensions. For example to recover the value on the last row and second column of the ```normal_array``` array we just write (remember counting starts at 0):
```
single_value = normal_array[2,1]
single_value
```
What is returned in that case is a single number that we can re-use:
```
single_value += 10
single_value
```
And that change doesn't affect the original value in the array:
```
normal_array
```
However we can also directly change the value in an array:
```
normal_array[2,1] = 23
normal_array
```
## 4.2 Accessing part of an array with indices: slicing
### 4.2.1 Selecting a range of elements
One can also select multiple elements in each dimension (e.g. multiple rows and columns in 2D) by using the ```start:end:step``` syntax. By default, if omitted, ```start=0```, ```end=last element``` and ```step=1```. For example to select the first **and** second rows of the first column, we can write:
```
normal_array[0:2,0]
```
Note that the ```end``` element is **not** included. One can use the same notation for all dimensions:
```
normal_array[0:2,2:4]
normal_array[1:,2:4]
```
### 4.2.2 Selecting all elements
If we only specify ```:```, it means we want to recover all elements in that dimension:
```
normal_array[:,2:4]
```
Also, if you only provide an index for a single axis, it is applied to the first dimension, here selecting an entire row:
```
normal_array
normal_array[1]
```
Finally note that if you want to recover only one element along a dimension (single row, column etc), you can do that in two ways:
```
normal_array[0,:]
```
This returns a one-dimensional array containing a single row from the original array:
```
normal_array[0,:].shape
```
Instead, if you specify actual boundaries that still select only a single row:
```
normal_array[0:1,:]
normal_array[0:1,:].shape
```
you recover a two-dimensional array where one of the dimensions has a size of 1.
### 4.2.3 Illustration on an image
We can for example only select half the rows of the image but all columns and channels:
```
image.shape
sub_image = image[0:150,:,:]
plt.imshow(sub_image);
```
Or we can take every fifth row and column from a single channel, which returns a pixelated version of the original image:
```
plt.imshow(image[::5,::5,0]);
```
## 4.3 Sub-arrays are not copies!
As often with Python when you create a new variable using a sub-array, that variable **is not independent** from the original variable:
```
sub_array = normal_array[:,2:4]
sub_array
normal_array
```
If for example we modify ```normal_array```, this is going to be reflected in ```sub_array``` too:
```
normal_array[0,2] = 100
normal_array
sub_array
```
The converse is also true:
```
sub_array[0,1] = 50
sub_array
normal_array
```
If you want your sub-array to be an *independent* copy of the original, you have to use the ```.copy()``` method:
```
sub_array_copy = normal_array[1:3,:].copy()
sub_array_copy
sub_array_copy[0,0] = 500
sub_array_copy
normal_array
```
## 4.4. Accessing parts of an array with coordinates
In the above case, we are limited to selecting rectangular sub-regions of the array. But sometimes we want to recover a series of specific elements, for example the elements (row=0, column=3) and (row=2, column=2). To achieve that we can simply index the array with one list containing the row indices and another containing the column indices:
```
row_indices = [0,2]
col_indices = [3,2]
normal_array[row_indices, col_indices]
normal_array
selected_elements = normal_array[row_indices, col_indices]
selected_elements
```
## 4.5 Logical indexing
The last way of extracting elements from an array is to use a boolean array of same shape. For example let's create a boolean array by comparing our original matrix to a threshold:
```
bool_array = normal_array > 40
bool_array
```
We see that we only have two elements which are above the threshold. Now we can use this logical array to *index* the original array. Imagine that the logical array is a mask with holes only in ```True``` positions and that we superpose it to the original array. Then we just take all the values visible in the holes:
```
normal_array[bool_array]
```
Coming back to our real image, we can for example first create an image that contains a single channel and then find bright regions in it:
```
single_channel = image[:,:,0]
mask = single_channel > 150
plt.imshow(mask);
```
And now we can recover all the pixels that are "selected" by this mask:
```
single_channel[mask]
```
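A typical use of such a masked selection is to compute statistics on just those pixels, for example their mean intensity:
```
# Mean intensity of the bright pixels selected by the mask
single_channel[mask].mean()
```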
## 4.6 Reshaping arrays
Often it is necessary to reshape arrays, i.e. keep elements unchanged but change their position. There are multiple functions that allow one to do this. The main one is of course ```reshape```.
### 4.6.1 ```reshape```
Given an array of $M \times N$ elements, one can reshape it to a shape $O \times P$ as long as $M \times N = O \times P$.
```
reshaped = np.reshape(normal_array,(2,6))
reshaped
reshaped.shape
300*451/150
```
With the image as example, we can reshape the array from $300x451x3$ to $150x902x3$:
```
plt.imshow(np.reshape(image, (150,902,3)))
```
### 4.6.2 Flattening
It's also possible to simply flatten an array i.e. remove all dimensions to create a 1D array. This can be useful for example to create a histogram of a high-dimensional array.
```
flattened = np.ravel(normal_array)
flattened
flattened.shape
```
### 4.6.3 Dimension collapse
Another common way that leads to reshaping is projection. Let's consider again our ```normal_array```:
```
normal_array
```
We can project all values along the first or second axis, to recover for each row/column the largest value:
```
proj0 = np.max(normal_array, axis = 0)
proj0
proj0.shape
```
We see that our projected array has lost a dimension, the one along which we performed the projection. With the image, we could project all channels along the third dimension:
```
plt.imshow(image.max(axis=2));
```
### 4.6.4 Swapping dimensions
We can also simply exchange the position of dimensions. This can be achieved in different ways. For example we can circularly shift dimensions with ```np.rollaxis```. This conserves the relative order of all axes:
```
array3D = np.ones((4, 10, 20))
array3D.shape
array_rolled = np.rollaxis(array3D, axis=1, start=0)
array_rolled.shape
```
Alternatively you can swap two axes. This doesn't preserve their relative positions:
```
array_swapped = np.swapaxes(array3D, 0,2)
array_swapped.shape
```
With the image, we can for example swap the two first axes:
```
plt.imshow(np.swapaxes(image, 0, 1));
```
### 4.6.5 Change positions
Finally, we can also change the position of elements without changing the shape of the array. For example if we have an array with two columns, we can swap them:
```
array2D = np.random.normal(0,1,(4,2))
array2D
np.fliplr(array2D)
```
Similarly, if we have two rows:
```
array2D = np.random.normal(0,1,(2,4))
array2D
np.flipud(array2D)
```
For more complex cases you can also use the more general ```np.flip()``` function.
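For example, ```np.flip()``` takes an explicit ```axis``` argument, so the two operations above can also be written as:
```
np.flip(array2D, axis=1)  # same as np.fliplr for a 2D array
np.flip(array2D, axis=0)  # same as np.flipud for a 2D array
```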
With the image, flipping a dimension just mirrors the picture. To do that we select a single channel:
```
plt.imshow(np.flipud(image[:,:,0]));
```
## Background Information
In a Stroop task, participants are presented with a list of words, with each word displayed in a color of ink. The participant’s task is to say out loud the color of the ink in which the word is printed. The task has two conditions: a congruent words condition, and an incongruent words condition. In the congruent words condition, the words being displayed are color words whose names match the colors in which they are printed: for example RED, BLUE. In the incongruent words condition, the words displayed are color words whose names do not match the colors in which they are printed: for example PURPLE, ORANGE. In each case, we measure the time it takes to name the ink colors in equally-sized lists. Each participant will go through and record a time from each condition.
## Questions For Investigation
**Question 1:**
What is our independent variable? What is our dependent variable?
**Answer 1:**
- Our independent variable will be the congruency of the word (congruent or incongruent).
- The dependent variable will be the time taken to name the ink color.
**Question 2:**
What is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform? Justify your choices.
**Answer 2:**
- **Null Hypothesis ($H_0$)**: Incongruency of the words will have no effect on, or will decrease, the time taken to name the ink color.
- **Alternative Hypothesis ($H_1$)**: Incongruency of the words will increase the time taken to name the ink color.
$$H_0: \mu_i \le \mu_c$$
$$H_1: \mu_i > \mu_c$$
Where,
- $\mu_i$ = Population mean of time taken to name the ink color for incongruent words
- $\mu_c$ = Population mean of time taken to name the ink color for congruent words
**Statistical Test**: *Paired one-tailed (positive) t-test*, because both conditions were measured on the same set of participants, one after the other, so the two samples are dependent and paired. We perform a one-tailed test because we want to compare the means in one direction only, and we use a t-test because the population parameters are unknown.
Assumptions:
- 95% confidence level, i.e. $\alpha = 0.05$
```
# Use inline plotting
%matplotlib inline
# Import modules
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Read dataset
df = pd.read_csv("Stroop-Dataset.csv")
# View the dataset
df.head(5)
# Print dataset description
df.describe()
# Calculate median of values
print("Median for congruent: {}".format(df['Congruent'].median()))
print("Median for incongruent: {}".format(df['Incongruent'].median()))
```
**Question 3**
Report some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability.
**Answer 3**
*Central Tendency*
- **Mean**: Congruent = 14.05, Incongruent = 22.01
- **Median**: Congruent = 14.3565, Incongruent = 21.0175
*Variability*
- **Standard deviation**: Congruent = 3.559, Incongruent = 4.797
**Question 4**
Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.
```
dataset = np.genfromtxt('Stroop-Dataset.csv', delimiter=',',dtype=np.float32)
dataset=np.delete(dataset,(0),axis=0)
plot = plt.boxplot(dataset,vert=True,widths = 0.2,patch_artist=True)
plt.setp(plot['boxes'], linewidth=2, facecolor='#1b9e77')
plt.setp(plot['whiskers'], linewidth=2)
plt.setp(plot['caps'], linewidth=2)
plt.setp(plot['fliers'], marker='x', markersize=8)
plt.setp(plot['medians'], linewidth=2)
df.hist()
plt.show()
```
From the **histogram**, it's clear that both distributions are slightly positively skewed. The mean in each case also lies near the peak of its distribution.
From the **boxplot**, it's clear that the incongruent data has two outliers which can also increase the mean for that dataset.
**Question 5**
Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations?
```
df
df['Difference'] = df['Incongruent'] - df['Congruent']
df
mean_difference = df['Difference'].mean()
mean_difference
standard_deviation = np.std(df['Difference'],ddof=1)
standard_deviation
standard_error = standard_deviation/np.sqrt(len(df['Difference']))
standard_error
t_statistic = mean_difference/standard_error
t_statistic
# t_critical value at degree of freedom (24-1 = 23) = 1.714
```
Results are as follows:
- **Mean difference** = 7.965
- **Standard deviation** = 4.865 (corrected)
- **Standard error** = 0.993
- **t statistic** = 8.021
- **t critical** = 1.714
- **p value** < 0.0001 => **Result is significant** (since the p-value is less than 0.05)
Thus, the null hypothesis is **rejected**.
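As an optional cross-check, the same paired one-tailed t-test can be run with `scipy.stats` (assuming `scipy` is available; the one-tailed p-value is half of the reported two-tailed value since the t statistic is positive):
```
from scipy import stats

t_stat, p_two_tailed = stats.ttest_rel(df['Incongruent'], df['Congruent'])
print(t_stat, p_two_tailed / 2)
```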
**Question 6**
What do you think is responsible for the effects observed? Can you think of an alternative or similar task that would result in a similar effect? Some research about the problem will be helpful for thinking about these two questions!
**Answer 6**
The lower time for congruent words may be due to habitual behavior. One part of the brain recognizes the color and another recognizes the word. When the two results agree, it takes less time to respond, as no further correction is required (which is necessary in the case of incongruent words).
A similar task could be one where the letters of words are jumbled in such a way that the first and last letters stay in place, and participants are asked to type them. In most cases, one can recognize a familiar word, but while typing it they will tend to write the correct spelling (because of muscle memory) and then fix it to match the given jumbled spelling. This, in turn, should take more time.
# Point Processes
**Author: Serge Rey <[email protected]> and Wei Kang <[email protected]>**
## Introduction
One philosophy of applying inferential statistics to spatial data is to think in terms of spatial processes and their possible realizations. In this view, an observed map pattern is one of the possible patterns that might have been generated by a hypothesized process. In this notebook, we are going to regard point patterns as the outcome of point processes. There are three major types of point process, which will result in three types of point patterns:
* [Random Patterns](#Random-Patterns)
* [Clustered Patterns](#Clustered-Patterns)
* [Regular Patterns](#Regular-Patterns)
We will investigate how to generate these point patterns via simulation (the Data Generating Process (DGP) is the corresponding point process), and inspect how the resulting point patterns differ from each other visually. In the [Quadrat statistics notebook](Quadrat_statistics.ipynb) and the [distance statistics notebook](distance_statistics.ipynb), we will adopt some statistics to infer whether a pattern is the outcome of a [Complete Spatial Randomness](https://en.wikipedia.org/wiki/Complete_spatial_randomness) (CSR) process.
A python file named "process.py" contains several point process classes with which we can generate point patterns of different types.
```
from pysal.explore.pointpats import PoissonPointProcess, PoissonClusterPointProcess, Window, poly_from_bbox, PointPattern
import pysal.lib as ps
from pysal.lib.cg import shapely_ext
%matplotlib inline
import numpy as np
#import matplotlib.pyplot as plt
```
## Random Patterns
Random point patterns are the outcome of CSR. CSR has two major characteristics:
1. Uniform: each location has equal probability of getting a point (where an event happens)
2. Independent: location of event points are independent
It usually serves as the null hypothesis in testing whether a point pattern is the outcome of a random process.
There are two types of CSR:
* $N$-conditioned CSR: $N$ is fixed
* Given the total number of events $N$ occurring within an area $A$, the locations of the $N$ events represent an independent random sample of $N$ locations where each location is equally likely to be chosen as an event.
* $\lambda$-conditioned CSR: $N$ is randomly generated from a Poisson process.
* The number of events occurring within a finite region $A$ is a random variable $\dot{N}$ following a Poisson distribution with mean $\lambda|A|$, with $|A|$ denoting area of $A$ and $\lambda$ denoting the intensity of the point pattern.
* Given the total number of events $\dot{N}$ occurring within an area $A$, the locations of the $\dot{N}$ events represent an independent random sample of $\dot{N}$ locations where each location is equally likely to be chosen as an event.
### Simulating CSR
We are going to generate several point patterns (200 events) from CSR within Virginia state boundary.
```
# open the virginia polygon shapefile
va = ps.io.open(ps.examples.get_path("virginia.shp"))
polys = [shp for shp in va]
# Create the exterior polygons for VA from the union of the county shapes
state = shapely_ext.cascaded_union(polys)
# create window from virginia state boundary
window = Window(state.parts)
```
#### 1. Generate a point series from N-conditioned CSR
```
# simulate a csr process in the same window (200 points, 1 realization)
# by specifying "asPP" false, we can generate a point series
# by specifying "conditioning" false, we can simulate a N-conditioned CSR
np.random.seed(5)
samples = PoissonPointProcess(window, 200, 1, conditioning=False, asPP=False)
samples
samples.realizations[0] # simulated event points
# build a point pattern from the simulated point series
pp_csr = PointPattern(samples.realizations[0])
pp_csr
pp_csr.plot(window=True, hull=True, title='Random Point Pattern')
pp_csr.n
```
#### 2. Generate a point series from $\lambda$-conditioned CSR
```
# simulate a csr process in the same window (200 points, 1 realization)
# by specifying "asPP" false, we can generate a point series
# by specifying "conditioning" True, we can simulate a lamda-conditioned CSR
np.random.seed(5)
samples = PoissonPointProcess(window, 200, 1, conditioning=True, asPP=False)
samples
samples.realizations[0] # simulated points
# build a point pattern from the simulated point series
pp_csr = PointPattern(samples.realizations[0])
pp_csr
pp_csr.plot(window=True, hull=True, title='Random Point Pattern')
pp_csr.n
```
The simulated point pattern has $194$ events rather than the Poisson mean $200$.
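This is expected: under the $\lambda$-conditioned process the number of events is itself a Poisson random variable with mean $200$. A quick numpy illustration of that variability (independent of `pointpats`):
```
# Draw a few Poisson counts with mean 200 to see how much the event total can vary
np.random.poisson(lam=200, size=10)
```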
#### 3. Generate a point pattern from N-conditioned CSR
```
# simulate a csr process in the same window (200 points, 1 realization)
# by specifying "asPP" True, we can generate a point pattern
# by specifying "conditioning" false, we can simulate a N-conditioned CSR
np.random.seed(5)
samples = PoissonPointProcess(window, 200, 1, conditioning=False, asPP=True)
samples
pp_csr = samples.realizations[0] # simulated point pattern
pp_csr
pp_csr.plot(window=True, hull=True, title='Random Point Pattern')
pp_csr.n
```
#### 4. Generate a point pattern of size 200 from a $\lambda$-conditioned CSR
```
# simulate a csr process in the same window (200 points, 1 realization)
# by specifying "asPP" True, we can generate a point pattern
# by specifying "conditioning" True, we can simulate a lamda-conditioned CSR
np.random.seed(5)
samples = PoissonPointProcess(window, 200, 1, conditioning=True, asPP=True)
samples
pp_csr = samples.realizations[0] # simulated point pattern
pp_csr
pp_csr.plot(window=True, hull=True, title='Random Point Pattern')
pp_csr.n
```
## Clustered Patterns
Clustered Patterns are more grouped than random patterns. Visually, we can observe more points at short distances. There are two sources of clustering:
* Contagion: presence of events at one location affects probability of events at another location (correlated point process)
* Heterogeneity: intensity $\lambda$ varies with location (heterogeneous Poisson point process)
We are going to focus on simulating a correlated point process in this notebook. One example of a correlated point process is the Poisson cluster process. Two stages are involved in simulating a Poisson cluster process. First, parent events are simulated from a $\lambda$-conditioned or $N$-conditioned CSR. Second, $n$ offspring events for each parent event are simulated within a circle of radius $r$ centered on the parent. Offspring events are independently and identically distributed.
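To make the two stages concrete, below is a small from-scratch numpy sketch on a unit square, purely for intuition; the radius and counts are illustrative, and the `pointpats` class used next handles the real window and bookkeeping:
```
np.random.seed(5)
n_parents, n_children, radius = 10, 20, 0.05
# Stage 1: parent events from an N-conditioned CSR on the unit square
parents = np.random.uniform(0, 1, size=(n_parents, 2))
# Stage 2: offspring placed uniformly within a disc of the given radius around each parent
angles = np.random.uniform(0, 2 * np.pi, size=(n_parents, n_children))
radii = radius * np.sqrt(np.random.uniform(0, 1, size=(n_parents, n_children)))
offsets = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=-1)
offspring = (parents[:, None, :] + offsets).reshape(-1, 2)
offspring.shape
```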
#### 1. Simulate a Poisson cluster process of size 200 with 10 parents and 20 children within 0.5 units of each parent (parent events: $N$-conditioned CSR)
```
np.random.seed(5)
csamples = PoissonClusterPointProcess(window, 200, 10, 0.5, 1, asPP=True, conditioning=False)
csamples
csamples.parameters #number of total events for each realization
csamples.num_parents #number of parent events for each realization
csamples.children # number of children events centered on each parent event
pp_pcp = csamples.realizations[0]
pp_pcp
pp_pcp.plot(window=True, hull=True, title='Clustered Point Pattern') #plot the first realization
```
It is obvious that there are several clusters in the above point pattern.
#### 2. Simulate a Poisson cluster process of size 200 with 10 parents and 20 children within 0.5 units of each parent (parent events: $\lambda$-conditioned CSR)
```
import numpy as np
np.random.seed(10)
csamples = PoissonClusterPointProcess(window, 200, 10, 0.5, 1, asPP=True, conditioning=True)
csamples
csamples.parameters #number of events for the realization might not be equal to 200
csamples.num_parents #number of parent events for the realization, not equal to 10
csamples.children # number of children events centered on each parent event
pp_pcp = csamples.realizations[0]
pp_pcp.plot(window=True, hull=True, title='Clustered Point Pattern')
```
#### 3. Simulate a Poisson cluster process of size 200 with 5 parents and 40 children within 0.5 units of each parent (parent events: $N$-conditioned CSR)
```
np.random.seed(10)
csamples = PoissonClusterPointProcess(window, 200, 5, 0.5, 1, asPP=True)
pp_pcp = csamples.realizations[0]
pp_pcp.plot(window=True, hull=True, title='Clustered Point Pattern')
```
## Programming Language : Python
<img align='left' src='https://github.com/harveenchadha/Breast_Cancer_Prediction/blob/master/images/python.jpeg?raw=1' >
## Problem Statement
Breast cancer (BC) is one of the most common cancers among women worldwide, representing the majority of new cancer cases and cancer-related deaths according to global statistics, making it a significant public health problem in today’s society.
The early diagnosis of BC can improve the prognosis and chance of survival significantly, as it can promote timely clinical treatment to patients. Further accurate classification of benign tumors can prevent patients undergoing unnecessary treatments. Thus, the correct diagnosis of BC and classification of patients into malignant or benign groups is the subject of much research. Because of its unique advantages in critical features detection from complex BC datasets, machine learning (ML) is widely recognized as the methodology of choice in BC pattern classification and forecast modelling.
<b>There are two main classifications of tumors. One is known as benign and the other as malignant. A benign tumor is a tumor that does not invade its surrounding tissue or spread around the body. A malignant tumor is a tumor that may invade its surrounding tissue or spread around the body.</b>
<hr>
## Dataset
1. https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)
2. https://www.kaggle.com/uciml/breast-cancer-wisconsin-data
## Step 0: Importing Libraries
```
## For Data Manipulation: Provides Dataframe as the datastructure to hold data
import pandas as pd
```
<img src='https://github.com/harveenchadha/Breast_Cancer_Prediction/blob/master/images/pandas_logo.png?raw=1' >
```
## For Faster data computation: Provides multidimensional array support to hold and manipulate data
import numpy as np
```
<img src='https://github.com/harveenchadha/Breast_Cancer_Prediction/blob/master/images/numpy.jpeg?raw=1' >
```
## For Data Visualization
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
```
<img src='https://github.com/harveenchadha/Breast_Cancer_Prediction/blob/master/images/matplotlib.png?raw=1' >
```
!pip install tensorflow==2.0.0-alpha0
import tensorflow as tf
tf.__version__
```
## Step 1: Question
1. What are the factors that contribute to malignant and benign tumor?
2. Is the problem stated as classification or regression problem?
3. Is our final model capable enough to predict the difference between the two types?
## Step 2: Wrangle Data
### Step 2.1: Gathering Data
```
from google.colab import files
uploaded = files.upload()
df = pd.read_csv('./data.csv') ## reading data from a csv into pandas datastructure
```
### Step 2.2: Accessing Data
```
df.head()
df.describe()
df.info()
```
### Step 2.3: Cleaning Data
```
df.isnull().sum() #checking if any column has a null value because nulls can cause a problem while training model
df.drop(columns=['Unnamed: 32'], inplace=True)
```
## Step 3: EDA
```
df.diagnosis.value_counts().plot(kind= 'bar');
fig, ax = plt.subplots(figsize =(16,4))
df.concavity_mean.plot()
malignant = df[df.diagnosis == 'M']
benign = df[df.diagnosis == 'B']
fig, ax = plt.subplots(figsize =(16,4))
plt.plot(malignant.concavity_mean, label = 'Malignant')
plt.plot(benign.concavity_mean, label = 'Benign')
plt.legend();
```
## Step 4: Model Data
```
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.metrics import confusion_matrix, classification_report
## Seperate out features and labels
X = df.drop(columns=['diagnosis'])
y = df.diagnosis
sc = StandardScaler()
X = sc.fit_transform(X)
## Since models work with numbers rather than text, we have to encode the labels numerically.
## LabelEncoder assigns labels alphabetically, so B -> 0 and M -> 1
le = LabelEncoder()
y = le.fit_transform(y)
## Set aside Training and test data for validation of our model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=True, random_state = 42)
## checking out the shape of the variables for training and test
X_train.shape, y_train.shape, X_test.shape, y_test.shape
from tensorflow.keras.layers import Dense, Flatten, Activation, Dropout
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
model = Sequential()
model.add(Dense(32, input_shape=(31,)))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss = 'binary_crossentropy' , metrics=['accuracy'], optimizer=Adam(lr=0.0001))
model.fit(X_train, y_train, validation_split = 0.1, epochs= 50, verbose =1, batch_size = 8)
```
## Step 5: Evaluating Model
```
y_pred = model.predict(X_test) ## we perform prediction on the validation set kept aside in step 4
y_pred = (y_pred >= 0.5).astype(int)
```
<b> Metric for evaluation: Confusion Matrix </b>
```
confusion_matrix( y_test, y_pred) ## for validation set
```
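`classification_report` was also imported in Step 4 but not used; it gives per-class precision and recall as a quick complement to the confusion matrix. A short sketch, reusing the label encoder fitted above for the class names:
```
print(classification_report(y_test, y_pred, target_names=le.classes_))
```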
## Step 9: Conclusion:
1. What are the factors that contribute to malignant and benign tumors? Answer given in Step 8.
2. Is the problem a classification or a regression problem? Classification.
3. Is our final model capable enough to predict the difference between the two types? Yes, our model is more than 94% accurate.
## Step 10: Communicate
Create a powerpoint and communicate your findings
<table class="ee-notebook-buttons" align="left">
<td><a target="_parent" href="https://github.com/giswqs/geemap/tree/master/tutorials/ImageCollection/03_filtering_image_collection.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_parent" href="https://nbviewer.jupyter.org/github/giswqs/geemap/blob/master/tutorials/ImageCollection/03_filtering_image_collection.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_parent" href="https://colab.research.google.com/github/giswqs/geemap/blob/master/tutorials/ImageCollection/03_filtering_image_collection.ipynb"><img width=26px src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
# Filtering an ImageCollection
As illustrated in the [Get Started section](https://developers.google.com/earth-engine/getstarted) and the [ImageCollection Information section](https://developers.google.com/earth-engine/ic_info), Earth Engine provides a variety of convenience methods for filtering image collections. Specifically, many common use cases are handled by `imageCollection.filterDate()`, and `imageCollection.filterBounds()`. For general purpose filtering, use `imageCollection.filter()` with an ee.Filter as an argument. The following example demonstrates both convenience methods and `filter()` to identify and remove images with bad registration from an `ImageCollection`:
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
### Simple cloud score
For scoring Landsat pixels by their relative cloudiness, Earth Engine provides a rudimentary cloud scoring algorithm in the `ee.Algorithms.Landsat.simpleCloudScore()` method. Also note that `simpleCloudScore()` adds a band called `cloud` to the input image. The cloud band contains the cloud score from 0 (not cloudy) to 100 (most cloudy).
```
# Load Landsat 5 data, filter by date and bounds.
collection = ee.ImageCollection('LANDSAT/LT05/C01/T2') \
.filterDate('1987-01-01', '1990-05-01') \
.filterBounds(ee.Geometry.Point(25.8544, -18.08874))
# Also filter the collection by the IMAGE_QUALITY property.
filtered = collection \
.filterMetadata('IMAGE_QUALITY', 'equals', 9)
# Create two composites to check the effect of filtering by IMAGE_QUALITY.
badComposite = ee.Algorithms.Landsat.simpleComposite(collection, 75, 3)
goodComposite = ee.Algorithms.Landsat.simpleComposite(filtered, 75, 3)
# Display the composites.
Map.setCenter(25.8544, -18.08874, 13)
Map.addLayer(badComposite,
{'bands': ['B3', 'B2', 'B1'], 'gain': 3.5},
'bad composite')
Map.addLayer(goodComposite,
{'bands': ['B3', 'B2', 'B1'], 'gain': 3.5},
'good composite')
```
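Note that the code above filters on the `IMAGE_QUALITY` metadata property rather than the cloud score itself. As a side illustration of the `cloud` band described earlier, the score could be inspected with something like the following sketch (the TOA collection id and visualization parameters are assumptions; `simpleCloudScore()` expects top-of-atmosphere reflectance):
```
# Illustrative only: the 'cloud' band added by simpleCloudScore(), 0 (clear) to 100 (cloudy)
toa_collection = ee.ImageCollection('LANDSAT/LT05/C01/T1_TOA') \
    .filterDate('1987-01-01', '1990-05-01') \
    .filterBounds(ee.Geometry.Point(25.8544, -18.08874))
scored = toa_collection.map(lambda img: ee.Algorithms.Landsat.simpleCloudScore(img))
Map.addLayer(ee.Image(scored.first()).select('cloud'),
             {'min': 0, 'max': 100},
             'cloud score (illustrative)')
```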
## Display Earth Engine data layers
```
Map.addLayerControl()
Map
```
<a href="https://colab.research.google.com/github/raqueeb/TensorFlow2/blob/master/scratch_model_weight_changes_affect_accuracy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
# Our data point for prediction; compare with the picture
input_data = np.array([2, 3])
# Our dictionary of weights
weights = {'node_0': np.array([1, 1]),
           'node_1': np.array([-1, 1]),
           'output': np.array([2, -1])
          }
# Calculate the value of node 0: node_0_value
node_0_value = (input_data * weights['node_0']).sum()
# Calculate the value of node 1: node_1_value
node_1_value = (input_data * weights['node_1']).sum()
# Put the node values into an array: hidden_layer_outputs
hidden_layer_outputs = np.array([node_0_value, node_1_value])
# Calculate the output: output
output = (hidden_layer_outputs * weights['output']).sum()
# Print the outputs to check
print(hidden_layer_outputs)
print(output)
# New weights and input data
weights = np.array([1, 2])
input_data = np.array([3, 4])
# Calculate the prediction: preds
preds = (weights * input_data).sum()
# Assume our target is 6
target = 6
# Calculate the error: error
error = preds - target
# Calculate the slope: slope
slope = 2 * input_data * error
# Print the slope
print(slope)
# Set the learning rate: learning_rate
learning_rate = 0.01
# Calculate the slope/gradient: gradient
gradient = 2 * input_data * error
# Update the weights: weights_updated
weights_updated = weights - learning_rate * gradient
# Get the updated prediction: preds_updated
preds_updated = (weights_updated * input_data).sum()
# Get the updated error: error_updated
error_updated = preds_updated - target
# Print the initial error
print(error)
# Print the new error
print(error_updated)
import numpy as np
# Our data point for prediction; compare with the picture
input_data = np.array([0, 3])
# Sample weights that we have changed
weights_0 = {'node_0': [2, 1],
             'node_1': [1, 2],
             'output': [1, 1]
            }
# The actual target value, needed to calculate the error
target_actual = 3
# Define two helper functions
def relu(input):
    output = max(0, input)
    return output
def predict_with_network(input_data_row, weights):
    node_0_input = (input_data_row * weights['node_0']).sum()
    # print(node_0_input)
    node_0_output = relu(node_0_input)
    # print(node_0_output)
    node_1_input = (input_data_row * weights['node_1']).sum()
    node_1_output = relu(node_1_input)
    hidden_layer_outputs = np.array([node_0_output, node_1_output])
    input_to_final_layer = (hidden_layer_outputs * weights['output']).sum()
    model_output = relu(input_to_final_layer)
    return model_output
# Make a prediction with the initial weights
model_output_0 = predict_with_network(input_data, weights_0)
# Calculate the error: error_0
error_0 = model_output_0 - target_actual
# Set new weights so the network can hit the target prediction (3): weights_1
weights_1 = {'node_0': [2, 1],
             'node_1': [1, 2],
             'output': [1, 0]
            }
# Prediction with the new weights: model_output_1
model_output_1 = predict_with_network(input_data, weights_1)
# Calculate the error again: error_1
error_1 = model_output_1 - target_actual
# Print everything to check
print(model_output_0)
print(model_output_1)
print(error_0)
print(error_1)
import numpy as np
# Our data point for prediction; compare with the picture
input_data = np.array([-1, 2])
# Our dictionary of weights
weights = {'node_0': np.array([3, 3]),
           'node_1': np.array([1, 5]),
           'output': np.array([2, -1])
          }
def relu(input):
    '''Define the ReLU function here'''
    # Take the maximum of the input and 0; if the input is negative, the output is 0: output
    output = max(0, input)
    # Return the value just calculated
    return(output)
# Calculate the value of the first node: node_0_output
node_0_input = (input_data * weights['node_0']).sum()
node_0_output = relu(node_0_input)
# Calculate the value of the second node: node_1_output
node_1_input = (input_data * weights['node_1']).sum()
node_1_output = relu(node_1_input)
# Put the new values into an array: hidden_layer_outputs
hidden_layer_outputs = np.array([node_0_output, node_1_output])
# Calculate the model output (without applying ReLU directly)
model_output = (hidden_layer_outputs * weights['output']).sum()
# Print model output
print(node_0_output)
print(node_1_output)
print(hidden_layer_outputs)
print(model_output)
```
This notebook is part of the *orix* documentation https://orix.readthedocs.io. Links to the documentation won’t work from the notebook.
# Visualizing Crystal Poles in the Pole Density Function
This notebook demonstrates how to quantify the distribution of crystallographic poles,
which is useful, for example, in texture analysis, using the pole density function (PDF).
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from orix import plot
from orix.crystal_map import Phase
from orix.data import ti_orientations
from orix.sampling import sample_S2
from orix.vector import Miller, Vector3d
# We'll want our plots to look a bit larger than the default size
plt.rcParams.update(
{
"figure.figsize": (10, 5),
"lines.markersize": 2,
"font.size": 15,
"axes.grid": False,
}
)
w, h = plt.rcParams["figure.figsize"]
```
First, we load some sample orientations from a Titanium sample dataset which represent
crystal orientations in the sample reference frame. These orientations have a defined
$622$ point group symmetry:
<div class="alert alert-info">
Note
If not previously downloaded, running this cell will download some example data from an
online repository to a local cache, see the docstring of
[ti_orientations](reference.rst#orix.data.ti_orientations) for more details.
</div>
```
ori = ti_orientations(allow_download=True)
ori
```
Let's look at the sample's $\{01\bar{1}1\}$ texture plotted in the stereographic projection.
First we must define the crystal's point group and generate the set of symmetrically
unique $(01\bar{1}1)$ poles:
```
m = Miller(hkil=(0, 1, -1, 1), phase=Phase(point_group=ori.symmetry))
m = m.symmetrise(unique=True)
m
```
Now let's compute the direction of these poles in the sample reference frame.
This is done using the [Orientation](reference.rst#orix.quaternion.Orientation)-[Vector3d](reference.rst#orix.vector.Vector3d)
`outer` product. We can pass `lazy=True` parameter to perform the computation in chunks
using `Dask`, this helps to reduce memory usage when there are many computations to be
performed.
```
poles = (~ori).outer(m, lazy=True, progressbar=True, chunk_size=2000)
poles.shape
```
We can plot these poles in the stereographic projection:
```
poles.scatter(
hemisphere="both",
alpha=0.02,
figure_kwargs=dict(figsize=(2 * h, h)),
axes_labels=["X", "Y"],
)
```
In this case there are many individual data points, which makes it difficult to
interpret whether regions contain higher or lower pole density.
In this case we can use the [Vector3d.pole_density_function()](reference.rst#orix.vector.Vector3d.pole_density_function)
to measure the pole density on the unit sphere $S_2$. Internally this uses the equal
area parameterization to calculate cells on $S_2$ with the same solid angle. In this
representation randomly oriented vectors have the same probability of intercepting each
cell, thus we can represent our sample's PDF as Multiples of Random Density (MRD). This
follows the work of <cite data-cite="rohrer2004distribution">Rohrer et al.(2004)</cite>.
Below is the equal area sampling representation on $S_2$ in both the stereographic
projection and 3D with a resolution of 10°:
```
fig = plt.figure(figsize=(2 * h, h))
ax0 = fig.add_subplot(121, projection="stereographic")
ax1 = fig.add_subplot(122, projection="3d")
v_mesh = sample_S2(resolution=10, method="equal_area")
ax0.hemisphere = "upper"
ax0.scatter(v_mesh)
ax0.show_hemisphere_label()
ax0.set_labels("X", "Y", None)
ax1.scatter(*v_mesh.data.T)
lim = 1
ax1.set_xlim(-lim, lim)
ax1.set_ylim(-lim, lim)
ax1.set_zlim(-lim, lim)
ax1.set_xticks((-1, 0, 1))
ax1.set_yticks((-1, 0, 1))
ax1.set_zticks((-1, 0, 1))
ax1.set_xlabel("X")
ax1.set_ylabel("Y")
ax1.set_zlabel("Z")
ax1.set_box_aspect((1, 1, 1))
```
For randomly distributed vectors on $S_2$, we can see that MRD tends to 1 with an increasing number of vectors:
NB. PDF plots are displayed on the same color scale.
```
num = (10_000, 100_000, 1_000_000, 10_000_000)
fig, ax = plt.subplots(
nrows=2,
ncols=2,
figsize=(2 * h, 2 * h),
subplot_kw=dict(projection="stereographic"),
)
ax = ax.ravel()
for i, n in enumerate(num):
v = Vector3d(np.random.randn(n, 3)).unit
ax[i].pole_density_function(v, log=False, vmin=0.8, vmax=1.2)
ax[i].set_labels("X", "Y", None)
ax[i].set_title(str(n))
```
We can also change the sampling angular `resolution` on $S_2$, the colormap with the
`cmap` parameter, and broadening of the density distribution with `sigma`:
```
fig, ax = plt.subplots(
nrows=2,
ncols=2,
figsize=(2 * h, 2 * h),
subplot_kw=dict(projection="stereographic"),
)
ax = ax.ravel()
v = Vector3d(np.random.randn(1_000_000, 3)).unit
ax[0].pole_density_function(v, log=False, resolution=1)
ax[0].set_title("Sampling resolution: 1$\degree$")
# change sampling resolution on S2
ax[1].pole_density_function(v, log=False, resolution=5)
ax[1].set_title("Sampling resolution: 5$\degree$")
# increase peak broadening
ax[2].pole_density_function(v, log=False, resolution=1, sigma=15)
ax[2].set_title("Sampling resolution: 1$\degree$\n$\sigma$: 15$\degree$")
# change colormap
ax[3].pole_density_function(v, log=False, resolution=1, cmap="gray_r")
ax[3].set_title('Sampling resolution: 1$\degree$\ncmap: "gray_r"')
for a in ax:
a.set_labels("X", "Y", None)
```
Poles from real samples tend not to be randomly oriented, as the material microstructure
is arranged into regions of similar crystal orientation, known as grains.
The PDF for the measured $\{01\bar{1}1\}$ poles from the Titanium sample loaded at the beginning
of the notebook:
```
poles.pole_density_function(
hemisphere="both", log=False, figure_kwargs=dict(figsize=(2 * h, h))
)
```
We can also plot these densities on a `log` scale to reduce the contrast between high
and low density regions.
By comparing the point data shown at the top of the notebook with the calculated pole
densities from PDF, we can see that not all regions in the point data representation
have the same density and that PDF is needed for better quantification:
```
fig, ax = plt.subplots(
ncols=2, subplot_kw=dict(projection="stereographic"), figsize=(2 * h, h)
)
ax[0].hemisphere = "upper"
ax[1].hemisphere = "upper"
ax[0].scatter(poles, s=2, alpha=0.02)
ax[1].pole_density_function(poles, log=True)
for a in ax:
a.set_labels("X", "Y", None)
```
A clear example of this can be shown by combining the PDF and point data onto the same
plot:
```
fig = poles.scatter(
alpha=0.01,
c="w",
return_figure=True,
axes_labels=["X", "Y"],
show_hemisphere_label=True,
)
poles.pole_density_function(log=True, figure=fig)
```
### Prerequisites
You should have completed steps 1-3 of this tutorial before beginning this exercise. The files required for this notebook are generated by those previous steps.
This notebook takes approximately 3 hours to run on an AWS `p3.8xlarge` instance.
```
# # Optional: you can set what GPU you want to use in a notebook like this.
# # Useful if you want to run concurrent experiments at the same time on different GPUs.
# import os
# os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
# os.environ["CUDA_VISIBLE_DEVICES"]="2"
from pathlib import Path
import numpy as np
from seq2seq_utils import extract_encoder_model, load_encoder_inputs
from keras.layers import Input, Dense, BatchNormalization, Dropout, Lambda
from keras.models import load_model, Model
from seq2seq_utils import load_text_processor
#where you will save artifacts from this step
OUTPUT_PATH = Path('./data/code2emb/')
OUTPUT_PATH.mkdir(exist_ok=True)
# These are where the artifacts are stored from steps 2 and 3, respectively.
seq2seq_path = Path('./data/seq2seq/')
langemb_path = Path('./data/lang_model_emb/')
# set seeds
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
```
# Train Model That Maps Code To Sentence Embedding Space
In step 2, we trained a seq2seq model that can summarize function code using `(code, docstring)` pairs as the training data.
In this step, we will fine tune the encoder from the seq2seq model to generate code embeddings in the docstring space by using `(code, docstring-embeddings)` as the training data. Therefore, this notebook will go through the following steps:
1. Load the seq2seq model and extract the encoder (remember seq2seq models have an encoder and a decoder).
2. Freeze the weights of the encoder.
3. Add some dense layers on top of the encoder.
4. Train this new model by supplying `(code, docstring-embeddings)` pairs. We will call this model `code2emb_model`.
5. Unfreeze the entire model, and resume training. This helps fine tune the model a little more towards this task.
6. Encode all of the code, including code that does not contain a docstring and save that into a search index for future use.
### Load seq2seq model from Step 2 and extract the encoder
First load the seq2seq model from Step 2, then extract the encoder (we do not need the decoder).
```
# load the pre-processed data for the encoder (we don't care about the decoder in this step)
tokens_encoder_input_data, tokens_doc_length = load_encoder_inputs(seq2seq_path/'train.tokens.npy')
tokens_seq2seq_Model = load_model(seq2seq_path/'code_summary_seq2seq_model.h5')
# load the pre-processed data for the encoder (we don't care about the decoder in this step)
apiseq_encoder_input_data, apiseq_doc_length = load_encoder_inputs(seq2seq_path/'train.apiseq.npy')
apiseq_seq2seq_Model = load_model(seq2seq_path/'api_seq_seq2seq_model.h5')
# load the pre-processed data for the encoder (we don't care about the decoder in this step)
methname_encoder_input_data, methname_doc_length = load_encoder_inputs(seq2seq_path/'train.methname.npy')
methname_seq2seq_Model = load_model(seq2seq_path/'methname_seq2seq_model.h5')
# Extract Encoder from seq2seq model
token_encoder_model = extract_encoder_model(tokens_seq2seq_Model)
# Get a summary of the encoder and its layers
token_encoder_model.name = 'Token-Encoder-Model'
token_encoder_model.summary()
# Extract Encoder from seq2seq model
apiseq_encoder_model = extract_encoder_model(apiseq_seq2seq_Model)
# Get a summary of the encoder and its layers
apiseq_encoder_model.name = 'ApiSeq-Encoder-Model'
apiseq_encoder_model.summary()
# Extract Encoder from seq2seq model
methname_encoder_model = extract_encoder_model(methname_seq2seq_Model)
# Get a summary of the encoder and its layers
methname_encoder_model.name = 'Methname-Encoder-Model'
methname_encoder_model.summary()
```
Freeze the encoder
```
# Freeze Encoder Model
for encoder_model in [token_encoder_model, apiseq_encoder_model, methname_encoder_model]:
for l in encoder_model.layers:
l.trainable = False
print(l, l.trainable)
```
### Load Docstring Embeddings From Step 3
The target for our `code2emb` model will be docstring-embeddings instead of docstrings. Therefore, we will use the embeddings for docstrings that we computed in step 3. For this tutorial, we will use the average over all hidden states, which is saved in the file `avg_emb_dim500_v2.npy`.
Note that in our experiments, a concatenation of the average, max, and last hidden state worked better than using the average alone. However, in the interest of simplicity we demonstrate just using the average hidden state. We leave it as an exercise to the reader to experiment with other approaches.
```
# Load Fitlam Embeddings
fastailm_emb = np.load(langemb_path/'avg_emb_dim500_v2.npy')
# check that the encoder inputs have the same number of rows as the docstring embeddings
assert tokens_encoder_input_data.shape[0] == fastailm_emb.shape[0]
assert methname_encoder_input_data.shape[0] == fastailm_emb.shape[0]
assert apiseq_encoder_input_data.shape[0] == fastailm_emb.shape[0]
fastailm_emb.shape
tokens_encoder_input_data.shape[0]
fastailm_emb.shape[0]
```
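For the exercise mentioned above, a rough sketch of what a concatenated target could look like. The max and last hidden-state file names below are hypothetical; step 3 as written only saves the average hidden state:
```
# Hypothetical file names, shown only to illustrate the concatenation idea
# max_emb = np.load(langemb_path/'max_emb_dim500_v2.npy')
# last_emb = np.load(langemb_path/'last_emb_dim500_v2.npy')
# combined_emb = np.hstack([fastailm_emb, max_emb, last_emb])  # would give 1500-d targets
```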
### Construct `codeFusion` Model Architecture
The `codeFusion` model is the fusion of the tokens, API sequence, and method name encoders, followed by a dense layer. This model feeds into the `code2emb` model.
```
from keras.layers import Concatenate
token_input = Input(shape=(tokens_doc_length,), name='Token-Input')
apiseq_input = Input(shape=(apiseq_doc_length,), name='API-Input')
methname_input = Input(shape=(methname_doc_length,), name='Methname-Input')
token_out = token_encoder_model(token_input)
apiseq_out = apiseq_encoder_model(apiseq_input)
methname_out = methname_encoder_model(methname_input)
concatenation_layer = Concatenate(name="Concatenate-Token-API-Methname")\
([token_out, apiseq_out, methname_out])
codeFusion_layer = Dense(1000, activation='relu')(concatenation_layer)
```
### Construct `code2emb` Model Architecture
The `code2emb` model is the encoder from the seq2seq model with some dense layers added on top. The output of the last dense layer of this model needs to match the dimensionality of the docstring embedding, which is 500 in this case.
```
# first dense layer with batch norm
x = Dense(500, activation='relu')(codeFusion_layer)
x = BatchNormalization(name='bn-1')(x)
out = Dense(500)(x)
code2emb_model = Model([token_input, apiseq_input, methname_input], out)
code2emb_model.summary()
```
### Train the `code2emb` Model
The model we are training is relatively simple - a few dense layers on top of the pre-trained encoders. We are leaving the encoders frozen at first, then will unfreeze them in a later step.
```
from keras.callbacks import CSVLogger, ModelCheckpoint
from keras import optimizers
code2emb_model.compile(optimizer=optimizers.Nadam(lr=0.002), loss='cosine_proximity')
script_name_base = 'code2emb_model_'
csv_logger = CSVLogger('{:}.log'.format(script_name_base))
model_checkpoint = ModelCheckpoint('{:}.epoch{{epoch:02d}}-val{{val_loss:.5f}}.hdf5'.format(script_name_base),
save_best_only=True)
batch_size = 20000
epochs = 15
history = code2emb_model.fit([tokens_encoder_input_data, apiseq_encoder_input_data, methname_encoder_input_data], fastailm_emb,
batch_size=batch_size,
epochs=epochs,
validation_split=0.12, callbacks=[csv_logger, model_checkpoint])
```
`.7453`
### Unfreeze all Layers of Model and Resume Training
In the previous step, we left the encoder frozen. Now that the dense layers are trained, we will unfreeze the entire model and let it train some more. This will hopefully allow this model to specialize on this task a bit more.
```
for l in code2emb_model.layers:
l.trainable = True
print(l, l.trainable)
code2emb_model.compile(optimizer=optimizers.Nadam(lr=0.0001), loss='cosine_proximity')
script_name_base = 'code2emb_model_unfreeze_'
csv_logger = CSVLogger('{:}.log'.format(script_name_base))
model_checkpoint = ModelCheckpoint('{:}.epoch{{epoch:02d}}-val{{val_loss:.5f}}.hdf5'.format(script_name_base),
save_best_only=True)
batch_size = 2000
epochs = 20
history = code2emb_model.fit([tokens_encoder_input_data, apiseq_encoder_input_data, methname_encoder_input_data], fastailm_emb,
batch_size=batch_size,
epochs=epochs,
initial_epoch=16,
validation_split=0.12, callbacks=[csv_logger, model_checkpoint])
```
### Save `code2emb` model
```
code2emb_model.save(OUTPUT_PATH/'code2emb_model.hdf5')
```
This file has been cached and is also available for download here:
`code2emb_model.hdf5`:https://storage.googleapis.com/kubeflow-examples/code_search/data/code2emb/code2emb_model.hdf5
# Vectorize all of the code without docstrings
We want to vectorize all of the code without docstrings so we can test the efficacy of the search on the code that was never seen by the model.
```
from keras.models import load_model
from pathlib import Path
import numpy as np
from seq2seq_utils import load_text_processor
code2emb_path = Path('./data/code2emb/')
seq2seq_path = Path('./data/seq2seq/')
data_path = Path('./data/processed_data/')
code2emb_model = load_model(code2emb_path/'code2emb_model.hdf5')
num_encoder_tokens, enc_pp = load_text_processor(seq2seq_path/'py_code_proc_v2.dpkl')
with open(data_path/'without_docstrings.function', 'r') as f:
no_docstring_funcs = f.readlines()
```
### Pre-process code without docstrings for input into `code2emb` model
We use the same transformer we used to train the original model.
```
# tokenized functions that did not contain docstrings
no_docstring_funcs[:5]
encinp = enc_pp.transform_parallel(no_docstring_funcs)
np.save(code2emb_path/'nodoc_encinp.npy', encinp)
```
### Extract code vectors
```
from keras.models import load_model
from pathlib import Path
import numpy as np
code2emb_path = Path('./data/code2emb/')
encinp = np.load(code2emb_path/'nodoc_encinp.npy')
code2emb_model = load_model(code2emb_path/'code2emb_model.hdf5')
```
Use the `code2emb` model to map the code into the same vector space as natural language
```
nodoc_vecs = code2emb_model.predict(encinp, batch_size=20000)
# make sure the number of output rows equal the number of input rows
assert nodoc_vecs.shape[0] == encinp.shape[0]
```
Save the vectorized code
```
np.save(code2emb_path/'nodoc_vecs.npy', nodoc_vecs)
```
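The final step listed at the top of this notebook is to put these vectors into a search index for later retrieval. A minimal sketch of how this could be done with the `nmslib` approximate nearest-neighbour library (assuming `nmslib` is installed; the index parameters and file name are illustrative):
```
import nmslib

# Build a cosine-similarity index over the code vectors
search_index = nmslib.init(method='hnsw', space='cosinesimil')
search_index.addDataPointBatch(nodoc_vecs)
search_index.createIndex(print_progress=True)
search_index.saveIndex(str(code2emb_path/'nodoc.nmslib'))

# Example query: ids and distances of the 10 nearest code vectors to the first one
ids, distances = search_index.knnQuery(nodoc_vecs[0], k=10)
```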
# Regular Expressions
Regular expressions are text-matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, from finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!).
If you're familiar with Perl, you'll notice that the syntax for regular expressions are very similar in Python. We will be using the <code>re</code> module with Python for this lecture.
Let's get started!
## Searching for Patterns in Text
One of the most common uses for the re module is for finding patterns in text. Let's do a quick example of using the search method in the re module to find some text:
```
import re
# List of patterns to search for
patterns = ['term1', 'term2']
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
for pattern in patterns:
print('Searching for "%s" in:\n "%s"\n' %(pattern,text))
#Check for match
if re.search(pattern,text):
print('Match was found. \n')
else:
print('No Match was found.\n')
```
Now we've seen that <code>re.search()</code> will take the pattern, scan the text, and then return a **Match** object. If no pattern is found, **None** is returned. To give a clearer picture of this match object, check out the cell below:
```
# List of patterns to search for
pattern = 'term1'
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
match = re.search(pattern,text)
type(match)
```
This **Match** object returned by the search() method is more than just a Boolean or None, it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object:
```
# Show start of match
match.start()
# Show end
match.end()
```
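Beyond the start and end positions, the same match object also exposes the matched text, the original input string, and the pattern that was used:
```
print(match.group())     # the text that was matched
print(match.string)      # the original input string
print(match.re.pattern)  # the pattern that was used
```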
## Split with regular expressions
Let's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.
```
# Term to split on
split_term = '@'
phrase = 'What is the domain name of someone with the email: [email protected]'
# Split the phrase
re.split(split_term,phrase)
```
Note how <code>re.split()</code> returns a list with the term to split on removed and the terms in the list are a split up version of the string. Create a couple of more examples for yourself to make sure you understand!
## Finding all instances of a pattern
You can use <code>re.findall()</code> to find all the instances of a pattern in a string. For example:
```
# Returns a list of all matches
re.findall('match','test phrase match is in middle')
```
## re Pattern Syntax
This will be the bulk of this lecture on using re with Python. Regular expressions support a huge variety of patterns beyond just simply finding where a single string occurred.
We can use *metacharacters* along with re to find specific types of patterns.
Since we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse:
```
def multi_re_find(patterns,phrase):
'''
Takes in a list of regex patterns
Prints a list of all matches
'''
for pattern in patterns:
print('Searching the phrase using the re check: %r' %(pattern))
print(re.findall(pattern,phrase))
print('\n')
```
### Repetition Syntax
There are five ways to express repetition in a pattern:
1. A pattern followed by the meta-character <code>*</code> is repeated zero or more times.
2. Replace the <code>*</code> with <code>+</code> and the pattern must appear at least once.
3. Using <code>?</code> means the pattern appears zero or one time.
4. For a specific number of occurrences, use <code>{m}</code> after the pattern, where **m** is replaced with the number of times the pattern should repeat.
5. Use <code>{m,n}</code> where **m** is the minimum number of repetitions and **n** is the maximum. Leaving out **n** <code>{m,}</code> means the value appears at least **m** times, with no maximum.
Now we will see an example of each of these using our multi_re_find function:
```
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ 'sd*', # s followed by zero or more d's
'sd+', # s followed by one or more d's
'sd?', # s followed by zero or one d's
'sd{3}', # s followed by three d's
'sd{2,3}', # s followed by two to three d's
]
multi_re_find(test_patterns,test_phrase)
```
## Character Sets
Character sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example: the input <code>[ab]</code> searches for occurrences of either **a** or **b**.
Let's see some examples:
```
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = ['[sd]', # either s or d
's[sd]+'] # s followed by one or more s or d
multi_re_find(test_patterns,test_phrase)
```
It makes sense that the first input <code>[sd]</code> returns every instance of s or d. Also, the second input <code>s[sd]+</code> returns any full strings that begin with an s and continue with s or d characters until another character is reached.
## Exclusion
We can use <code>^</code> to exclude terms by incorporating it into the bracket syntax notation. For example: <code>[^...]</code> will match any single character not in the brackets. Let's see some examples:
```
test_phrase = 'This is a string! But it has punctuation. How can we remove it?'
```
Use <code>[^!.? ]</code> to find matches that are not a <code>!</code>, <code>.</code>, <code>?</code>, or space. Add a <code>+</code> to check that the match appears at least once. This basically translates into finding the words.
```
re.findall('[^!.? ]+',test_phrase)
```
## Character Ranges
As character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is <code>[start-end]</code>.
Common use cases are to search for a specific range of letters in the alphabet. For instance, <code>[a-f]</code> would return matches with any occurrence of letters between a and f.
Let's walk through some examples:
```
test_phrase = 'This is an example sentence. Lets see if we can find some letters.'
test_patterns=['[a-z]+', # sequences of lower case letters
'[A-Z]+', # sequences of upper case letters
'[a-zA-Z]+', # sequences of lower or upper case letters
'[A-Z][a-z]+'] # one upper case letter followed by lower case letters
multi_re_find(test_patterns,test_phrase)
```
## Escape Codes
You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits, whitespace, and more. For example:
<table border="1" class="docutils">
<colgroup>
<col width="14%" />
<col width="86%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Code</th>
<th class="head">Meaning</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\d</span></tt></td>
<td>a digit</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\D</span></tt></td>
<td>a non-digit</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\s</span></tt></td>
<td>whitespace (tab, space, newline, etc.)</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\S</span></tt></td>
<td>non-whitespace</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\w</span></tt></td>
<td>alphanumeric</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\W</span></tt></td>
<td>non-alphanumeric</td>
</tr>
</tbody>
</table>
Escapes are indicated by prefixing the character with a backslash <code>\</code>. Unfortunately, a backslash must itself be escaped in normal Python strings, and that results in expressions that are difficult to read. Using raw strings, created by prefixing the literal value with <code>r</code>, eliminates this problem and maintains readability.
Personally, I think this use of <code>r</code> to create a raw string is probably one of the things that makes regex code in Python hard to read at first for someone unfamiliar with it. Hopefully after seeing these examples this syntax will become clear.
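As a quick contrast (an illustrative aside, not part of the original lecture), matching a single literal backslash shows why raw strings help:
```
# The regex pattern for one literal backslash is a double backslash.
# As a normal Python string that becomes '\\\\'; as a raw string it is just r'\\'.
re.findall('\\\\', r'C:\Users\test')  # works, but hard to read
re.findall(r'\\', r'C:\Users\test')   # same pattern, much more readable
```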
```
test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag'
test_patterns=[ r'\d+', # sequence of digits
r'\D+', # sequence of non-digits
r'\s+', # sequence of whitespace
r'\S+', # sequence of non-whitespace
r'\w+', # alphanumeric characters
r'\W+', # non-alphanumeric
]
multi_re_find(test_patterns,test_phrase)
```
## Conclusion
You should now have a solid understanding of how to use the regular expression module in Python. There are many more special characters and pattern constructs, but it would be unreasonable to go through every single use case. Instead, take a look at the full [documentation](https://docs.python.org/3/library/re.html#regular-expression-syntax) if you ever need to look up a particular pattern.
You can also check out the nice summary tables at this [source](http://www.tutorialspoint.com/python/python_reg_expressions.htm).
Good job!
# Encoding of categorical variables
In this notebook, we will present typical ways of dealing with
**categorical variables** by encoding them, namely **ordinal encoding** and
**one-hot encoding**.
Let's first load the entire adult dataset containing both numerical and
categorical data.
```
import pandas as pd
adult_census = pd.read_csv("../datasets/adult-census.csv")
# drop the duplicated column `"education-num"` as stated in the first notebook
adult_census = adult_census.drop(columns="education-num")
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name])
```
## Identify categorical variables
As we saw in the previous section, a numerical variable is a
quantity represented by a real or integer number. These variables can be
naturally handled by machine learning algorithms that are typically composed
of a sequence of arithmetic instructions such as additions and
multiplications.
In contrast, categorical variables have discrete values, typically
represented by string labels (but not only) taken from a finite list of
possible choices. For instance, the variable `native-country` in our dataset
is a categorical variable because it encodes the data using a finite list of
possible countries (along with the `?` symbol when this information is
missing):
```
data["native-country"].value_counts().sort_index()
```
How can we easily recognize categorical columns among the dataset? Part of
the answer lies in the columns' data type:
```
data.dtypes
```
If we look at the `"native-country"` column, we observe its data type is
`object`, meaning it contains string values.
## Select features based on their data type
In the previous notebook, we manually defined the numerical columns. We could
take a similar approach here. Instead, we will use the scikit-learn helper function
`make_column_selector`, which allows us to select columns based on
their data type. We will illustrate how to use this helper.
```
from sklearn.compose import make_column_selector as selector
categorical_columns_selector = selector(dtype_include=object)
categorical_columns = categorical_columns_selector(data)
categorical_columns
```
Here, we created the selector by passing the data type to include; we then
passed the input dataset to the selector object, which returned a list of
column names that have the requested data type. We can now filter out the
unwanted columns:
```
data_categorical = data[categorical_columns]
data_categorical.head()
print(f"The dataset is composed of {data_categorical.shape[1]} features")
```
In the remainder of this section, we will present different strategies to
encode categorical data into numerical data which can be used by a
machine-learning algorithm.
## Strategies to encode categories
### Encoding ordinal categories
The most intuitive strategy is to encode each category with a different
number. The `OrdinalEncoder` will transform the data in such manner.
We will start by encoding a single column to understand how the encoding
works.
```
from sklearn.preprocessing import OrdinalEncoder
education_column = data_categorical[["education"]]
encoder = OrdinalEncoder()
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
We see that each category in `"education"` has been replaced by a numeric
value. We could check the mapping between the categories and the numerical
values by checking the fitted attribute `categories_`.
```
encoder.categories_
```
Now, we can check the encoding applied on all categorical features.
```
data_encoded = encoder.fit_transform(data_categorical)
data_encoded[:5]
encoder.categories_
print(
f"The dataset encoded contains {data_encoded.shape[1]} features")
```
We see that the categories have been encoded for each feature (column)
independently. We also note that the number of features before and after the
encoding is the same.
However, be careful when applying this encoding strategy:
using this integer representation leads downstream predictive models
to assume that the values are ordered (0 < 1 < 2 < 3... for instance).
By default, `OrdinalEncoder` uses a lexicographical strategy to map string
category labels to integers. This strategy is arbitrary and often
meaningless. For instance, suppose the dataset has a categorical variable
named `"size"` with categories such as "S", "M", "L", "XL". We would like the
integer representation to respect the meaning of the sizes by mapping them to
increasing integers such as `0, 1, 2, 3`.
However, the lexicographical strategy used by default would map the labels
"S", "M", "L", "XL" to 2, 1, 0, 3, by following the alphabetical order.
The `OrdinalEncoder` class accepts a `categories` constructor argument to
pass categories in the expected ordering explicitly. You can find more
information in the
[scikit-learn documentation](https://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features)
if needed.
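As an illustrative sketch (the `"size"` column below is hypothetical, exactly as in the example above), passing an explicit ordering could look like this:
```
# Hypothetical "size" column, used only to illustrate the `categories` argument
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

sizes = pd.DataFrame({"size": ["S", "XL", "M", "L"]})
size_encoder = OrdinalEncoder(categories=[["S", "M", "L", "XL"]])
size_encoder.fit_transform(sizes)  # array([[0.], [3.], [1.], [2.]])
```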
If a categorical variable does not carry any meaningful order information
then this encoding might be misleading to downstream statistical models and
you might consider using one-hot encoding instead (see below).
### Encoding nominal categories (without assuming any order)
`OneHotEncoder` is an alternative encoder that prevents the downstream
models to make a false assumption about the ordering of categories. For a
given feature, it will create as many new columns as there are possible
categories. For a given sample, the value of the column corresponding to the
category will be set to `1` while all the columns of the other categories
will be set to `0`.
We will start by encoding a single feature (e.g. `"education"`) to illustrate
how the encoding works.
```
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse=False)
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p><tt class="docutils literal">sparse=False</tt> is used in the <tt class="docutils literal">OneHotEncoder</tt> for didactic purposes, namely
easier visualization of the data.</p>
<p class="last">Sparse matrices are efficient data structures when most of your matrix
elements are zero. They won't be covered in detail in this course. If you
want more details about them, you can look at
<a class="reference external" href="https://scipy-lectures.org/advanced/scipy_sparse/introduction.html#why-sparse-matrices">this</a>.</p>
</div>
We see that encoding a single feature will give a NumPy array full of zeros
and ones. We can get a better understanding using the associated feature
names resulting from the transformation.
```
feature_names = encoder.get_feature_names_out(input_features=["education"])
education_encoded = pd.DataFrame(education_encoded, columns=feature_names)
education_encoded
```
As we can see, each category (unique value) became a column; the encoding
returned, for each sample, a 1 to specify which category it belongs to.
Let's apply this encoding on the full dataset.
```
print(
f"The dataset is composed of {data_categorical.shape[1]} features")
data_categorical.head()
data_encoded = encoder.fit_transform(data_categorical)
data_encoded[:5]
print(
f"The encoded dataset contains {data_encoded.shape[1]} features")
```
Let's wrap this NumPy array in a dataframe with informative column names as
provided by the encoder object:
```
columns_encoded = encoder.get_feature_names_out(data_categorical.columns)
pd.DataFrame(data_encoded, columns=columns_encoded).head()
```
Look at how the `"workclass"` variable of the first 3 records has been
encoded and compare this to the original string representation.
The number of features after the encoding is more than 10 times larger than
in the original data because some variables such as `occupation` and
`native-country` have many possible categories.
### Choosing an encoding strategy
Choosing an encoding strategy will depend on the underlying models and the
type of categories (i.e. ordinal vs. nominal).
Indeed, using an `OrdinalEncoder` will output ordinal categories. It means
that there is an order in the resulting categories (e.g. `0 < 1 < 2`). The
impact of violating this ordering assumption is really dependent on the
downstream models. Linear models will be impacted by misordered categories
while tree-based models will not be.
Thus, in general `OneHotEncoder` is the encoding strategy used when the
downstream models are **linear models** while `OrdinalEncoder` is used with
**tree-based models**.
You still can use an `OrdinalEncoder` with linear models but you need to be
sure that:
- the original categories (before encoding) have an ordering;
- the encoded categories follow the same ordering as the original categories.
The next exercise highlights the issue of misusing `OrdinalEncoder` with a
linear model.
Also, with tree-based models there is no need to use a `OneHotEncoder`, even if
the original categories do not have a given order. This will be the purpose of
the final exercise of this sequence.
## Evaluate our predictive pipeline
We can now integrate this encoder inside a machine learning pipeline like we
did with numerical data: let's train a linear classifier on the encoded data
and check the generalization performance of this machine learning pipeline using
cross-validation.
Before we create the pipeline, we have to take a closer look at the `native-country` column.
Let's recall some statistics regarding this column.
```
data["native-country"].value_counts()
```
We see that the `Holand-Netherlands` category occurs rarely. This will
be a problem during cross-validation: if this sample ends up in the test set
during splitting, then the classifier will not have seen the category during
training and will not be able to encode it.
In scikit-learn, there are two solutions to bypass this issue:
* list all the possible categories and provide it to the encoder via the
keyword argument `categories`;
* use the parameter `handle_unknown`.
Here, we will use the latter solution for simplicity.
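As an aside, a minimal sketch of the first option (not used in the rest of this notebook) could look like the following, where the known categories are computed from the full dataset:
```
# Illustrative sketch only: list every category observed in the full dataset
# and pass them explicitly so no category is unknown at transform time.
all_categories = [
    sorted(data_categorical[col].unique()) for col in data_categorical.columns
]
encoder_with_categories = OneHotEncoder(categories=all_categories, sparse=False)
```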
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;">Tip</p>
<p class="last">Be aware the <tt class="docutils literal">OrdinalEncoder</tt> exposes as well a parameter
<tt class="docutils literal">handle_unknown</tt>. It can be set to <tt class="docutils literal">use_encoded_value</tt> and by setting
<tt class="docutils literal">unknown_value</tt> to handle rare categories. You are going to use these
parameters in the next exercise.</p>
</div>
We can now create our machine learning pipeline.
```
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
model = make_pipeline(
OneHotEncoder(handle_unknown="ignore"), LogisticRegression(max_iter=500)
)
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">Here, we need to increase the maximum number of iterations to obtain a fully
converged <tt class="docutils literal">LogisticRegression</tt> and silence a <tt class="docutils literal">ConvergenceWarning</tt>. Contrary
to the numerical features, the one-hot encoded categorical features are all
on the same scale (values are 0 or 1), so they would not benefit from
scaling. In this case, increasing <tt class="docutils literal">max_iter</tt> is the right thing to do.</p>
</div>
Finally, we can check the model's generalization performance only using the
categorical columns.
```
from sklearn.model_selection import cross_validate
cv_results = cross_validate(model, data_categorical, target)
cv_results
scores = cv_results["test_score"]
print(f"The accuracy is: {scores.mean():.3f} +/- {scores.std():.3f}")
```
As you can see, this representation of the categorical variables is
slightly more predictive of the revenue than the numerical variables
that we used previously.
In this notebook we have:
* seen two common strategies for encoding categorical features: **ordinal
encoding** and **one-hot encoding**;
* used a **pipeline** to use a **one-hot encoder** before fitting a logistic
regression.

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Science/HeatAndTemperature/heat-and-temperature.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
# Heat and Temperature
## Instructions before you start:
### Click the fast forward button ">>" in the menu bar above. Click "Yes" to restart and run.
```
%%html
<button onclick="run_all()">Run All Cells</button>
<script>
function run_all(){
Jupyter.actions.call('jupyter-notebook:run-all-cells-below');
Jupyter.actions.call('jupyter-notebook:save-notebook');
}
</script>
%%html
<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){ code_shown=false; $('div.input').hide() });
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>
```
## Heat and Temperature: How Human Needs Led to the Technologies for Obtaining and Controlling Thermal Energy
## Introduction
In this notebook we will give a brief overview of thermal energy and then move on to the uses of thermal energy in society and how our uses of it have changed throughout history. We will start by identifying and explaining common devices and systems used to generate, transfer, or control thermal energy. Then we will look at how human purposes have led to the development of heat-related materials and technologies.
### Thermal Energy
First we begin by giving a brief definition of what thermal energy is. A more complete and involved definition will be given in following notebooks. In the most basic sense thermal energy is the energy we associate with temperature. At a microscopic level it is made up of the energy of vibration, rotation, and motion of the particles and molecules that make up matter. As the particles and molecules move faster they contain more thermal energy and the temperature of matter is higher.
<img src="images/Matter_Temperature.jpg" alt="MatterTemp" width=500 align=middle>
As the temperature increases the thermal energy also increases. It's important to note that thermal energy of an object is given by its internal energy and not by its temperature. We can increase the thermal energy of an object by placing it next to an object warmer than itself. The warmer object will heat the cooler object through the transfer of thermal energy. As the thermal energy of the cooler object increases the thermal energy of the warmer object will decrease.
Before we move on, let's discuss a few of the ways thermal energy can be generated. It can be generated by chemical reactions, such as when you light a fire: thermal energy is released by the chemical reactions occurring as the wood burns. Thermal energy can also be generated mechanically by rubbing two objects together. For example, you could rub your hands together, and the energy from the motion of your hands is converted to an increase in thermal energy at a microscopic level. The energy in an electrical current can also generate an increase in thermal energy; an electrical cord, for instance, will warm to the touch as electrical energy is converted in part to the thermal energy of the wire. Finally, light energy can be converted to thermal energy, as anyone who has stood in the sunshine can affirm.
This will be as far as we go into a definition about thermal energy as a more precise and complete one will be given in follow up notebooks.
## Devices and Systems used to Generate, Transfer, or Control Thermal Energy
In this section we are going to cover multiple common devices and systems that are used to generate, transfer, or control thermal energy in some way. Most of these devices and systems are seen every day, but we might not be fully aware of them. We will start off by considering devices and systems that we have in our homes. I want to let the reader know that we will be explaining what the devices do, but we won't be going into detail about how they function since that will be covered in a later notebook. This section is to get the reader familiar with the devices and what they accomplish.
### Exercise
Try to think of and list as many devices and systems that are used to generate, transfer or control thermal energy. If you want you can add them to the list below. You can work with a partner if you are running out of ideas. The **Add** button adds what you type in the box to the list, the **Remove** button removes the last item in the list and the **Clear List** button clears the list.
```
import ipywidgets as widgets
from IPython.display import display, Math, Latex
import traitlets
from IPython.display import Markdown
import random
output_list = []
list_output = widgets.HTML('')
text_box = widgets.Text(
value='',
placeholder='Enter list item',
description='',
disabled=False
)
add_item_button = widgets.Button(
value=False,
description='Add',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Add to list',
continuous_update=True
)
remove_item_button = widgets.Button(
value=False,
description='Remove',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Remove from list',
continuous_update=True
)
clear_list_button = widgets.Button(
value=False,
description='Clear List',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Clear List',
continuous_update=True
)
add_item_button.layout.width = '100px'
remove_item_button.layout.width = '100px'
clear_list_button.layout.width = '100px'
clear_list_button.layout.margin = '0px 0px 10px 600px'
list_output.layout.margin = '20px 0px 0px 0px'
list_widget = widgets.HBox(children=[text_box, add_item_button, remove_item_button])
display_widget = widgets.VBox(children=[clear_list_button, list_widget, list_output])
def update_Add(change):
if(not (text_box.value == '')):
output_list.append(text_box.value)
list_length = len(output_list)
text_box.value = ''
list_output.value = "<ul style='list-style-type:circle'>"
for i in range(list_length):
list_output.value = list_output.value + "<li>" + output_list[i] + "</li>"
list_output.value = list_output.value + "</ul>"
def update_Remove(change):
list_length = len(output_list)
if(not(list_length == 0)):
del output_list[list_length-1]
list_output.value = "<ul style='list-style-type:circle'>"
for i in range(len(output_list)):
list_output.value = list_output.value + "<li>" + output_list[i] + "</li>"
list_output.value = list_output.value + "</ul>"
def update_Clear(change):
del output_list[:]
list_output.value = ''
add_item_button.on_click(update_Add)
remove_item_button.on_click(update_Remove)
clear_list_button.on_click(update_Clear)
display_widget
```
Once you have completed the exercise above click the button below to open up the next section. In the section we will cover various devices that may be on your list and explain how they relate to generating, transferring or controlling thermal energy.
```
button_click = False
show_button = widgets.Button(
value=False,
description='Show Next Section',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Show Next Section',
continuous_update=True
)
def update_Show(change):
global button_click
if(not button_click):
display(Markdown(
"""
### Air Conditioner/Furnace & Thermostat
The first devices we cover are common to most homes and buildings: the air conditioner, furnace and thermostat. An air conditioner is used to remove thermal energy and moisture from a home or building, cooling it in turn. A furnace, on the other hand, is used to add thermal energy to a home or building by heating the air in it. The thermostat is used to control the temperature in a home or building by controlling the furnace and air conditioner. Thermostats have advanced enough that they can automatically adjust the temperature based on the time of day, according to the preference of the building owner.
All of these devices together create a system that generates, transfers and controls the thermal energy in a home or building. Some devices not mentioned yet, like windows, insulation and building materials, also contribute to this system since they maintain the thermal energy of a home or building by not allowing it to transfer outside.
### Refrigerator/Freezer
Other common devices found in almost every home are the refrigerator and freezer. A refrigerator or freezer keeps the air inside it a constant cold temperature. The refrigerator or freezer does this by constantly cycling and cooling the air similar to an air conditioner. As mentioned above a house is made with insulation to keep the transfer of thermal energy low and a refrigerator and freezer are designed the same way to keep the cold air inside from escaping out. Refrigerators and freezers are much smaller than a house so keeping the thermal energy lower is easier since it doesn't use as much energy to keep the air colder.
<center><img src="images/refrigerator.png" alt="fridge" width="350"></center>
### Stove/Oven
A device that would be the opposite of a refrigerator or freezer would be an oven or stove. An electrical oven generates thermal energy by heating up elements inside it which in turn heat up the air inside it. It is also insulated to keep the thermal energy inside from escaping. A stove generates thermal energy the same way by heating up elements but it is not insulated so pots or pans may transfer the heat from the elements to the food. The amount of thermal energy generated by the elements is controlled from the dials on the stove/oven.
### Barbecue
A barbecue is another device that is used to generate and control thermal energy. This is done by natural gas or propane being burned through the burners on the barbecue or by charcoal being burned to generate the thermal energy. The dials control how much propane or natural gas is burned and the amount of charcoal determine how much thermal energy is generated.
### Water Heater
Another very common device in homes is a hot water heater. A hot water heater uses electricity or natural gas to increase the thermal energy and temperature of the water in its tank. The hot water is then distributed throughout the house when required. Where the hot water ends up is controlled by a person turning on a hot water tap.
### Insulation/Building Materials
Insulation and building materials are both used to control the transfer of thermal energy from one object or space to another. Insulation can be as simple as a layer of air enclosed between two materials like a thermos or could be specialized material similar to that used in a house. The insulating material acts as a barrier to stop the thermal energy from one side transferring to the other. Just as in a house if it's winter time you usually don't want the inside of the house to be the same temperature as the outside so we use insulation to stop this. That said, even with good insulation some thermal energy will constantly be lost from your house in the winter but your furnace is used to constantly add this thermal energy back to keep the temperature of the house constant.
The building materials used also act as an insulator since they form the shell of the building or object. In North America wood is typically used when building the shell of a house since it's cheap and it's a better insulator than concrete, brick or steel. Structures and objects made of concrete, brick, steel, or some type of metal are typically stronger than wood but are usually a lot more expensive which is why houses are generally made of wood or similar material.
<center><img src="images/thermos.jpg" alt="thermos" width="350"></center>
### Doors and Windows
The other devices that are common to every home or building are the doors and windows, which also contribute to the insulation of a building. Single pane windows don't act as good insulators since glass is not a good insulator, but double pane windows have been developed with a layer of air or some type of gas in between the panes that insulates much better than a single pane.
There are also varying types of doors that are better at insulating a house than others. A thin door doesn't insulate very well from the outside which is why usually thicker doors are used on the outsides of homes. The doors and windows need to be sealed well otherwise the outside and inside air will be able to mix and change the thermal energy. If the doors and windows aren't sealed well then the furnace or air conditioner would have to use more energy to keep the thermal energy in the house or building constant.
### Fans
A device that is a component to many different appliances and things around the home is a fan. Fans are used to transfer the thermal energy generated throughout the appliance. A convection oven has a fan that distributes the thermal energy generated by the elements around the oven to heat up the food evenly. A fridge, air conditioner and freezer will have fans to circulate the cooled air around the appliance or home. Fans are commonly used to transfer thermal energy from its current space to another and along with some vents or ducts also control where that thermal energy is going.
### Hair Dryer
A hair dryer is another device that generates and transfers thermal energy. An element inside the hair dryer will generate the heat and thermal energy and then a fan will blow and transfer the heat and thermal energy out.
### Washing Machine and Dryer
The last devices we will look at are a washing machine and dryer. When the washing machine is running a warm or hot cycle it typically heats the water it needs with an internal element but if it is an older version it will get the hot water it needs from the hot water heater. A dryer is used to dry the clothes that are wet from the washing machine. The dryer uses an element to generate thermal energy and a fan transfers the thermal energy throughout the dryer to the clothes.
"""))
button_click = True
show_button.on_click(update_Show)
show_button
```
## Heat-Related Materials and Technologies Developed for Human Purposes
To understand why we developed heat-related materials and technologies for our own purposes we only need to understand the function of the material or technology. When we look back throughout history, many of the heat-related materials and technologies were developed to make survival easier. In modern days we are improving upon those materials and technologies, making them more efficient and easier for people to acquire. There are also heat-related materials and technologies for making our lives more convenient or for generating energy.
We will explain the purposes for the devices and systems mentioned in the section above and then move onto heat-related materials and some other technologies that haven't been listed. The list of devices and systems above can be broken down into a few main purposes. The first purpose is for shelter or a place to live in for survival. Our second purpose is for subsistence or for food and water and the third would be for convenience.
### Exercise
Before you move onto the next section go through the devices and technologies that have been listed above and any others you may have on your list and try to determine the purposes for them. These purposes will have to do with their function but there will also be broader purposes that are to do with survival, convenience and others.
- Air Conditioner
- Furnace
- Thermostat
- refrigerator
- Freezer
- Stove
- Oven
- barbecue
- Water Heater
- Insulation
- Building Materials
- Doors
- Windows
- Fans
- Hair Dryer
- Washing Machine & Dryer
```
button_click2 = False
show_button2 = widgets.Button(
value=False,
description='Show Next Section',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Show Next Section',
continuous_update=True
)
def update_Show2(change):
global button_click2
if(not button_click2):
display(Markdown("""
#### Shelter
- Air Conditioner/Furnace & Thermostat
- Insulation/Building Materials
- Doors and Windows
- Fans
The devices listed above all have to do with keeping a house or building at a certain temperature. The Air conditioner/Furnace are concerned with cooling down or heating up the building based on the thermostat setting while the insulation/building materials and doors and windows are concerned with keeping the temperature outside from affecting the temperature inside. The fans on the air conditioner/furnace keep the air circulating in a home to maintain a constant temperature over the whole building.
These devices don't necessarily mean your survival but as most people are aware we live on a planet with a wide variety of climates. If you lived in a location where the temperatures dropped well below zero degrees celsius then a furnace could determine your survival. Now if you lived in a location that reached very hot temperatures then your survival could depend on an air conditioner.
#### Food and Water
- Refrigerator/Freezer
- Stove/Oven
- Barbeque
The devices above have to do with food or water. These devices have to do with both survival and convenience. A refrigerator/freezer allows you to store food and keep food and drinks a lot longer than if they were left out in the open. This doesn't necessarily have to do with survival, but without being able to store the food you could potentially run out of it. The stove, oven and barbeque are used to cook the food or boil water. Without being able to cook the food or boil water you could be exposing yourself to a dangerous bacterium or virus. This is why raw meat needs to be cooked and unclean water needs to be boiled, otherwise you could become quite sick.
#### Convenience
- Water Heater
- Hair Dryer
- Washing Machine and Dryer
These are heat-related devices that are more for convenience than survival. You can make the argument that the hot water from the water heater used in the washing machine, for washing hands or for dishes ensures they are cleaned better, but it doesn't mean your survival would be put in jeopardy without it. A hair dryer or clothes dryer is for convenience since they make drying your hair or clothes easier and faster. The washing machine is mostly for convenience since it reduces the amount of work and time it takes to wash clothes.
"""))
button_click2 = True
show_button2.on_click(update_Show2)
show_button2
```
### Purposes
Now let's focus on some of the main purposes for which heat-related materials and technologies have been developed. As mentioned above, the purposes can be reduced down to the broad categories of survival and convenience. There is also the purpose of electrical energy generation, which will be looked at last.
#### Survival
We have already touched on survival in a couple of the sections above. These days there are a lot of heat-related materials and technologies that people do not realize they depend on. In the past the heat-related materials and technologies would have been more obvious since people could understand them but currently they have advanced enough that people might not understand them and take them for granted. We've addressed some heat-related materials and technologies above that relate to our survival and we will go through a few more to ensure we have a decent understanding of how much materials and technologies have an impact upon our survival.
> **House (Shelter)**
>
> An easy example to understand how heat-related materials and technologies have enabled our survival is to examine our shelter or house. This has been looked at above but here we will examine how the heat-related materials and technologies in a house have developed and advanced through history. In particular we will look at the material the home is made of and the technologies used with adjusting the temperature of it.
>
> The first heat-related technology we look at is burning wood for thermal energy. Burning wood for thermal energy has been used throughout history and still today. The use of burning coal came next and became popular in the 19th century from its abundance, availability and its ability to generate higher heat output than wood. Along with better materials to burn came advancing ventilation and delivery systems for the thermal energy generated in houses. With the use of natural gas becoming popular in the late 19th century and the discovery of electricity the modern heating system started to take shape. With the discovery of electricity and the use of natural gas and a hundred years later the modern furnace was invented and can be found in most houses. Through the use of electricity the air conditioner was invented in the early 20th century and has continually increased in popularity since then. It has also advanced in efficiency and technology till the modern unit we know and use in the majority of homes today.
>
> The material a home is made of can have an effect on survival since it effects how much thermal energy escapes from a house and thus how much the temperature changes. Throughout history houses were usually made out of stone, brick, or concrete like material and in North America homes are typically made out of wood or similar material. These materials are still in use today but have advanced to better keep the temperature and thermal energy of a home constant. The biggest advancement would be the development of insulating material used in between the walls of a house to limit the transfer of thermal energy to the outside of the home.
>
> In moderate climates these materials, devices and technologies would not normally determine survival on a given day, but even moderate climates can experience extreme changes in weather during which, without shelter, someone's life would be in danger. This is even more evident living in the Northern Hemisphere, where winter brings extremely low temperatures. In the past a house or shelter has been linked with survival, and it is even more so today since we have spread out to regions with extreme climates.
> **Food/Water**
>
> No matter what climate we live in the food and water we eat and drink are necessary for our survival. When the water we drink is not clean of bacteria or viruses we could become quite sick. To rid water of bacteria and viruses it needs to be boiled for a few minutes to ensure the bacteria and viruses are killed. In the past burning wood and coal would have been the primary ways to generate thermal energy to boil water. With the discovery of electricity there have been plenty of technology and devices developed like a stove to boil water.
>
> The food we eat can contain harmful bacteria and viruses if it is not properly cleaned or cooked. When food is grown in the ground it needs to be properly cleaned to stop any harmful bacteria or viruses from being eaten. The simplest method to cleaning food is to thoroughly wash it with clean water. When cooking raw meat we need to be sure it is cooked fully otherwise we could become quite sick from eating it. In the past food would have been cooked from the thermal energy generated by burning wood or coal but these days we have multiple devices that generate thermal energy to cook food. Common devices used to cook food are a stove, oven, microwave and barbeque.
>
> Without having clean water to drink or food that has been properly cleaned and cooked we could ingest some pretty harmful bacteria and viruses that could be life threatening.
<table>
<tr>
<td><img src="images/OldStove.jpg" alt="Old" width="300"/></td>
<td><img src="images/modernstove2.jpg" alt="New" width="300"/></td>
</tr>
</table>
> **Clothing**
>
> The most common example of a heat-related material that is pertinent to our survival would be the clothing we wear. Similar to one's house, clothing is most useful to our survival in harsh climates. During the summer if it is hot out we can remove clothing to become cooler, but during the winter we need warmer clothing that will allow us to survive if we have to go outside. In the past clothing would have been made of animal hides, but we have come a long way in being able to make our own clothing. Through the discovery of new materials and technologies we are able to create or use materials that are thinner and more efficient at retaining or releasing thermal energy. Winter is the season of the year that would be considered the harshest, which is why we have developed specific clothing like jackets, pants, gloves and headwear to retain your thermal energy and allow you to survive outdoors. During the other seasons of the year clothing is more for comfort and convenience since you could survive outdoors without it.
<table>
<tr>
<td><img src="images/OldClothing.jpg" alt="Old" width="250"/></td>
<td><img src="images/modernwinterjacket.jpg" alt="New" width="250"/></td>
</tr>
</table>
#### Convenience and Comfort
When you have a house to live in and clean food and water most of the other heat-related materials and technologies are for making your life easier. These materials and devices have advanced in efficiency and technology throughout history to provide you with more convenience, time and comfort. We will discuss a few examples below and look at how they have changed over time.
> **Hot Water**
>
> Hot water can have an effect on survival but these days we use it more for convenience and comfort. If you have ever lost hot water in your home you quickly realize a shower with cold water is not very comfortable. When cleaning clothes, dishes and anything else hot water is more effective and efficient than using cold water.
>
> To obtain hot water you only need to heat up water. In the past water would have been heated using thermal energy from burning wood or some other fuel source. These days with the use of electricity a hot water heater uses an element to heat up the water.
> **Washing Machine and Dryer**
>
> A washing machine and dryer are used for convenience and saving time. In the past clothes and sheets would have been washed by hand using a washboard or similar device and would have been hung on a washing line to dry. Washing by hand is hard work and time consuming and clothes take a long time to air dry. A washing machine takes out the hard work and time spent hand washing clothes, and a dryer reduces the time it takes for the clothes to dry.
<table>
<tr>
<td><img src="images/oldwashingboard.png" alt="Old" width="200"/></td>
<td><img src="images/washingmachine.jpg" alt="New" width="200"/></td>
</tr>
</table>
> **Transportation**
>
> One of the more convenient pieces of technology you likely use everyday would be some kind of vehicle for transportation. Without a vehicle to drive around it would take you a lot more time when traveling. Modern vehicles are enclosed with air conditioners and heaters making a drive much more comfortable than being exposed to the outside temperature.
>
> In the past vehicles would have used a steam engine to move. The steam engine worked by burning some fuel source to generate thermal energy that was transferred to water which would boil and generate steam to be used to move the vehicle. As history moved forward the modern combustion engine was invented that used fuel like gasoline to create combustion when ignited which was used to move the vehicle. The modern combustion engine is much more efficient than the steam engine in moving a vehicle. From the invention of the modern combustion engine till now there have been great advancements in its design and efficiency to use less fuel while traveling the same distance.
<table>
<tr>
<td><img src="images/oldcar.jpg" alt="Old" width="300"/></td>
<td><img src="images/moderncar.jpg" alt="New" width="300"/></td>
</tr>
</table>
#### Electrical Energy Generation
Heat-related materials and technologies have long been used to generate mechanical, electrical and thermal energy. Without electrical energy, all of the devices that we have become so accustomed to and use every day would not work. Electricity is typically generated from an electrical generator that converts mechanical energy into electrical energy. The mechanical energy comes from a turbine being rotated, and its rotation comes from the generation of thermal energy. The thermal energy used can be generated using various methods outlined below.
> **Steam**
>
> A turbine is rotated by the steam generated from heating up water. The thermal energy used in heating up the water can be generated by burning coal or some other material. Another alternative for the thermal energy is using a nuclear reactor that generates thermal energy from nuclear fission, which is then used to heat up the water.
>
> **Combustion**
>
> A combustion turbine uses the combustion generated from igniting some type of gas or fuel to rotate the turbine. A gas like natural gas is commonly used, as well as gasoline or diesel fuel, for combustion. Smaller generators that are similar to turbines can generate electricity in the same manner through the combustion of gasoline or diesel.
>
> **Geothermal Energy**
>
> Geothermal energy is the thermal energy we find deep within the ground. As seen in the image below the hot water found deep underground is pumped up to the surface and is then used to generate steam which then turns a turbine to generate electrical energy.
<img src="images/geothermal3.png" alt="" width="400" align="middle"/>
> Another use of the geothermal energy is using it to heat and cool your home using a heat pump. The heat pump uses the thermal energy from the water pumped up to either heat or cool your house.
<img src="images/geothermal2.gif" alt="" width="400" align="middle"/>
### Exercise
We have touched on some of the broader purposes of heat-related materials and technologies that have to do with survival, convenience and electrical energy generation above. As an exercise, try to think of any other heat-related devices, materials or technologies that we haven't discussed and determine what their purpose and function is. If you are having trouble with the purpose or function you can always ask your teacher or do a web search to find it out.
## Conclusion
In this notebook we have addressed what thermal energy is and how heat and temperature are related to it. We discussed multiple devices, technologies and materials that are used to generate, transfer or control thermal energy and heat in some fashion. The purposes of the devices, technologies and materials were discussed in detail, and the broader purposes of how they relate to survival, convenience and energy generation were looked at. We also looked at how houses, food/water and clothing, and the devices and technology associated with them, developed throughout history. This notebook gives a lot of information about the various heat-related devices, materials and technologies that are used in our everyday lives and how much of an impact they have. A more in-depth look into thermal energy and how the devices, materials and technologies function will be given in later notebooks.
## Image Sites
0. https://chem.libretexts.org/LibreTexts/Mount_Royal_University/Chem_1202/Unit_5%3A_Fundamentals_of_Thermochemistry/5.2%3A_Heat
1. http://outside-in.me/vintage-cook-stoves-for-sale/vintage-cook-stoves-for-sale-old-kitchen-wood-stoves-for-sale-old-cook-stove-yahoo-image-search-results-old-kitchen-cook-vintage-gas-cook-stoves-for-sale/
2. https://pixabay.com/en/kids-stove-children-toys-tin-stove-434916/
3. http://angels4peace.com/modern-kitchen-stove.html/modern-kitchen-stove-simple-popular
4. https://pixabay.com/en/ginsburgconstruction-kitchen-3-330737/
5. http://collections.musee-mccord.qc.ca/scripts/printtour.php?tourID=CW_InuitClothing_EN&Lang=2
6. https://pixabay.com/en/washboard-wash-tub-old-formerly-982990/
7. https://pixabay.com/en/auto-renault-juvaquatre-pkw-old-1661009/
8. https://pixabay.com/en/car-audi-auto-automotive-vehicle-604019/
9. http://photonicswiki.org/index.php?title=Survey_of_Renewables
10. https://sintonair.com/geothermal-heat-pump/
11. http://ca.audubon.org/conservation/geothermal-power
12. https://de.wikipedia.org/wiki/Waschmaschine
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Emission AI
#### Microsoft AI for Earth Project
AI Monitoring Coal-fired Power Plant Emission from Space
#### Team Members
Ziheng Sun, Ahmed Alnaim, Zack Chester, Daniel Tong
#### Date
4/30/2020-10/30/2021
#### Abstract
The goal is to build a reusable machine learning model to estimate the emission of coal-fired power plants from satellite observations. The machine learning model will be trained on the monitoring data of the power plants collected from EPA eGRID, the remotely sensed datasets of TROPOMI on Sentinel 5 Precursor, and the meteorological observations from MERRA.
The model will take remote sensing records as inputs, and output an estimated NOX emission daily volume.
### Step 1: Read CSV
The demo CSV files are located in the folder `data`. The CSV initially contains six columns: Facility ID (EPA Code of PP), Latitude, Longitude, Date, EPA Daily NO2 divided by 1e+05, and TROPOMI NO2_column_number_density (total vertical column of NO2, the ratio of the slant column density of NO2 and the total air mass factor). The [EPA](https://www.epa.gov/egrid), [TROPOMI](http://www.tropomi.eu/) and [MERRA](https://gmao.gsfc.nasa.gov/reanalysis/MERRA/) datasets can all be accessed and retrieved free of charge.
One preprocessing step is to turn the date column into three separate columns, since the machine learning model cannot parse date strings as input; they need to be turned into numeric values. We transform the date column into dayofweek, dayofmonth, and dayofyear. The original date column is excluded from the training dataset to pass the data type checker.
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # Plotting and Visualizing data
from sklearn.model_selection import train_test_split
import os
print(os.listdir("data"))
# Describe the data, and get a overview
data = pd.read_csv('data/tropomi_epa_kvps_NO2_2019_56.csv',parse_dates=["Date"])
print("==================>")
print(data.describe())
data['dayofyear'] = data['Date'].dt.dayofyear
data['dayofweek'] = data['Date'].dt.dayofweek
data['dayofmonth'] = data['Date'].dt.day
data = data.drop(columns=["Date"])
print("==================>")
print(data.columns)
# Separating dependednt & Indepented Variables
x = data.iloc[:, data.columns != 'EPA_NO2/100000'].values
y = data.iloc[:, data.columns == 'EPA_NO2/100000']
# show the shape of x and y to make sure they have the same length
# Train Test Split at ratio 0.33
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.33)
y_train = y_train.to_numpy()
y_test = y_test.to_numpy()
y_train = y_train.ravel()
y_test = y_test.ravel()
print("===================>")
print("X_train's shape: ", x_train.shape)
print("y_train's shape: ", y_train.shape)
print("x_test's shape: ", x_test.shape)
print("y_test's shape: ", y_test.shape)
# print(y_test)
# print(y_train)
```
### Step 2: Train Deep Learning model
Hyperparameter tuning is a troublesome task. Here we build and train a Keras (TensorFlow) model.
```
# Model Import and Build
import tensorflow as tf
# from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import SGD, Adagrad, Adadelta, RMSprop, Adam
model = tf.keras.Sequential(
[
        tf.keras.Input(shape=(11,)),
layers.Dense(500, activation="relu"),
layers.Dense(500, activation="relu"),
layers.Dense(500, activation="relu"),
layers.Dense(500, activation="relu"),
layers.Dense(500, activation="relu"),
layers.Dense(1, activation="sigmoid"),
]
) # No weights at this stage!
# Call the model on a test input
# x = tf.ones((1, 4))
# y = model(x)
print("Number of weights after calling the model:", len(model.weights)) # 6
# lr_schedule = keras.optimizers.schedules.ExponentialDecay(
# initial_learning_rate=1e-2,
# decay_steps=10000,
# decay_rate=0.9)
# sgd = SGD(lr=lr_schedule)
model.summary()
model.compile(optimizer="adadelta", loss="mse", metrics=[tf.keras.metrics.mean_squared_error])
model.fit(x_train, y_train, batch_size=8, validation_split = 0.15, epochs=100)
```
### Step 3: Test ML model
Predict on the test dataset using the trained models
```
# Use the trained models to make predictions
y_test_pred = model.predict(x_test)
```
### Step 4: Visualize the Results
Visualization of the ML results can facilitate comparison between machine learning models and help identify the pros and cons of various models on different groups of data samples.
Blue dots are the true observations from EPA. Black dots are the values predicted by the machine learning model.
```
def visualizeResults(modelname, x_test, y_test, pred):
# Visualization
## Check the fitting on training set
plt.scatter(x_test[:,3], y_test, color='blue')
plt.scatter(x_test[:,3], pred, color='black')
# plt.scatter(y_test, pred, color='black')
plt.title(modelname + ' Fit on testing set')
    plt.xlabel('TROPOMI-Test')
plt.ylabel('EPA-Test')
plt.show()
visualizeResults("Neural Network", x_test, y_test, y_test_pred)
```
### Step 5: Calculate quantitative metrics
For a regression task, the accuracy metrics are normally mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R2).
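For reference (these are the standard definitions, not from the original notebook), with $y_i$ the observed values, $\hat{y}_i$ the predictions, and $\bar{y}$ the mean of the observations:
$$ \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \left|y_i - \hat{y}_i\right|, \qquad \mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2, \qquad R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2} $$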
```
from sklearn.metrics import accuracy_score
from sklearn import metrics
def showAccuracyMetrics(mlmethod, model, y_test, y_pred):
print("Model ", mlmethod, " Performance:")
# print(y_test.shape, y_pred.shape)
mae = metrics.mean_absolute_error(y_test, y_pred)
mse = metrics.mean_squared_error(y_test, y_pred)
r2 = metrics.r2_score(y_test, y_pred)
print(" MAE: ", mae)
print(" MSE: ", mse)
print(" R2: ", r2)
# print(y_test, linear_pred)
showAccuracyMetrics("Neural Network: ", model, y_test, y_test_pred)
```
### Step 6: Feature Importance
0 - 'FID',
1 - 'Latitude',
2 - 'Longitude',
3 - 'TROPOMI*1000',
4 - 'Wind (Monthly)',
5 - 'Temp (Monthly)',
6 - 'Precip (Monthly)',
7 - 'Cloud Fraction (Monthly)',
8 - 'dayofyear',
9 - 'dayofweek',
10 - 'dayofmonth'
```
def showImportance(model):
labels = ['FID', 'Latitude', 'Longitude', 'TROPOMI*1000',\
'Wind (Monthly)', 'Temp (Monthly)', 'Precip (Monthly)',\
'Cloud Fraction (Monthly)', 'dayofyear', 'dayofweek', 'dayofmonth']
# get importance
importance = model.best_estimator_.feature_importances_
print(len(labels))
print(importance)
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %s, Score: %.5f' % (labels[i],v))
# plot feature importance
plt.bar([x for x in range(len(importance))], importance)
plt.show()
# showImportance(rf_regressor)
```
### Conclusion
This notebook shows how to use machine learning models to predict the emission of coal-fired power plants using satellite observations like TROPOMI and meteorology observations from MERRA.
The results show that the random forest and voting ensemble models are similar in performance. The random forest model has a slightly better performance in this case. That is also because the ensembled model is built from the trained linear regression and random forest models, so its results are basically an average of the two models' results.
The linear regression model basically outputs values in a narrow range regardless of the variance in the TROPOMI observation and never produces values greater than 0.3 or less than 0.1. It is not suitable for this prediction.
##### Final Remarks
Using machine learning to predict ground emission from remotely sensed data is possible. More improvements are needed to ensure the accuracy, generality, and stability of the trained models in a long-term operational run. The demonstrated power plant site is in a rural area in Alabama and there are few NO2 emission sources other than the power plant itself. More research is required to make it work for those power plants located in or near urban regions, where other emission sources may dominate the NOX in the atmosphere.
### Citation
Please cite this work as:
`Sun, Ziheng, Zack Chester, and Daniel Tong. 2021. "EmissionAI: Ai Monitoring Coal-Fired Power Plant Emission from Space." https://github.com/ZihengSun/EmissionAI `
```
pip list
```
```
# Adds link to the scripts folder
import sys
import os
sys.path.append("../../scripts/")
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
from trajectory import Trajectory, load_trajectory_dict
from hivevo.patients import Patient
import filenames
import copy
from activity import get_average_activity
from proba_fix import get_proba_fix
```
# Activity plots
## Functions
Format of the dictionaries: trajectories[region][rev/non_rev/syn/non_syn]
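A hypothetical illustration of that layout (the region and mutation-class keys below are inferred from the functions that follow; each leaf value is a list of `Trajectory` objects):
```
# Sketch of the expected structure, not actual data:
# trajectories = {
#     "env": {"rev": [...], "non_rev": [...], "syn": [...], "non_syn": [...]},
#     "pol": {...}, "gag": {...}, "all": {...},
# }
# so trajectories["env"]["rev"] is the list of reversion trajectories in the env region.
```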
```
def get_mean_in_time(trajectories, nb_bins=20, freq_range=[0.4, 0.6]):
"""
Computes the mean frequency in time of a set of trajectories from the point they are seen in the freq_range window.
Returns the middle of the time bins and the computed frequency mean.
"""
# Create bins and select trajectories going through the freq_range
time_bins = np.linspace(-677, 3000, nb_bins)
trajectories = [traj for traj in trajectories if np.sum(np.logical_and(
traj.frequencies >= freq_range[0], traj.frequencies < freq_range[1]), dtype=bool)]
# Offset trajectories to set t=0 at the point they are seen in the freq_range and adds all the frequencies / times
# to arrays for later computation of mean
t_traj = np.array([])
f_traj = np.array([])
for traj in trajectories:
idx = np.where(np.logical_and(traj.frequencies >=
freq_range[0], traj.frequencies < freq_range[1]))[0][0]
traj.t = traj.t - traj.t[idx]
t_traj = np.concatenate((t_traj, traj.t))
f_traj = np.concatenate((f_traj, traj.frequencies))
# Binning of all the data in the time bins
filtered_fixed = [traj for traj in trajectories if traj.fixation == "fixed"]
filtered_lost = [traj for traj in trajectories if traj.fixation == "lost"]
freqs, fixed, lost = [], [], []
for ii in range(len(time_bins) - 1):
freqs = freqs + [f_traj[np.logical_and(t_traj >= time_bins[ii], t_traj < time_bins[ii + 1])]]
fixed = fixed + [len([traj for traj in filtered_fixed if traj.t[-1] < time_bins[ii]])]
lost = lost + [len([traj for traj in filtered_lost if traj.t[-1] < time_bins[ii]])]
# Computation of the mean in each bin, active trajectories contribute their current frequency,
# fixed contribute 1 and lost contribute 0
mean = []
for ii in range(len(freqs)):
mean = mean + [np.sum(freqs[ii]) + fixed[ii]]
mean[-1] /= (len(freqs[ii]) + fixed[ii] + lost[ii])
nb_active = [len(freq) for freq in freqs]
nb_dead = [fixed[ii] + lost[ii] for ii in range(len(fixed))]
return 0.5 * (time_bins[1:] + time_bins[:-1]), mean, nb_active, nb_dead
def make_mean_in_time_dict(trajectories):
regions = ["env", "pol", "gag", "all"]
means = {}
freq_ranges = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
times = []
for freq_range in freq_ranges:
means[str(freq_range)] = {}
for region in regions:
means[str(freq_range)][region] = {}
for key in trajectories[region].keys():
times, means[str(freq_range)][region][key], _, _ = get_mean_in_time(trajectories[region][key], freq_range=freq_range)
return times, means, freq_ranges
def make_pfix(nb_bin=8):
regions = ["env", "pol", "gag", "all"]
pfix = {}
for region in regions:
pfix[region] = {}
for key in trajectories[region].keys():
tmp_freq_bin, tmp_proba, tmp_err = get_proba_fix(trajectories[region][key], nb_bin=nb_bin)
pfix[region][key] = {"freq_bin": tmp_freq_bin, "proba": tmp_proba, "error": tmp_err}
return pfix
```
## Mean in time
```
trajectories = load_trajectory_dict("../../trajectory_dict")
times, means, freq_ranges = make_mean_in_time_dict(trajectories)
pfix = make_pfix(nb_bin=8)
def plot_mean(times, means, savefig=False, fontsize=16):
freq_ranges = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
colors = ["r","b","g"]
plt.figure(figsize=(14,10))
for ii, freq_range in enumerate(freq_ranges):
plt.plot(times, means[str(freq_range)]["all"]["rev"], "-", color=colors[ii], label="rev")
plt.plot(times, means[str(freq_range)]["all"]["non_rev"], "--", color=colors[ii], label="non_rev")
plt.xlabel("Time [days]", fontsize=fontsize)
plt.ylabel("Frequency", fontsize=fontsize)
plt.ylim([-0.03, 1.03])
plt.grid()
plt.legend(fontsize=fontsize)
if savefig:
plt.savefig(savefig+".pdf", format="pdf")
plt.show()
trajectories = load_trajectory_dict("../../trajectory_dict")
times, means, freq_ranges = make_mean_in_time_dict(trajectories)
plot_mean(times, means)
```
# Plot 2
```
fontsize=16
grid_alpha = 0.5
colors = ["C0","C1","C2","C4"]
markersize=12
freq_ranges = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
regions = ["env","pol","gag"]
lines = ["-","--",":"]
fig, axs = plt.subplots(ncols=2, nrows=1, figsize=(14,7), sharey=True)
# Plot left
for ii, freq_range in enumerate(freq_ranges):
for jj, region in enumerate(regions):
axs[0].plot(times, means[str(freq_range)][region]["non_syn"], lines[jj], color=colors[ii])
line1, = axs[0].plot([0], [0], "k-")
line2, = axs[0].plot([0], [0], "k--")
line3, = axs[0].plot([0], [0], "k:")
line4, = axs[0].plot([0], [0], "-", color=colors[0])
line5, = axs[0].plot([0], [0], "-", color=colors[1])
line6, = axs[0].plot([0], [0], "-", color=colors[2])
axs[0].set_xlabel("Time [days]", fontsize=fontsize)
axs[0].set_ylabel("Frequency", fontsize=fontsize)
axs[0].set_ylim([-0.03, 1.03])
axs[0].grid(alpha=grid_alpha)
axs[0].legend([line1, line2, line3, line4, line5, line6], regions + ["[0.2, 0.4]", "[0.4, 0.6]", "[0.6, 0.8]"], fontsize=fontsize, ncol=2)
# Plot right
for ii,region in enumerate(regions):
axs[1].plot(pfix[region]["non_syn"]["freq_bin"], pfix[region]["non_syn"]["proba"], lines[ii], color=colors[3])
axs[1].plot([0,1], [0,1], "k--")
axs[1].set_xlabel("Initial frequency", fontsize=fontsize)
axs[1].set_ylabel("Fixation probability", fontsize=fontsize)
axs[1].set_ylim([-0.03, 1.03])
axs[1].set_xlim([-0.03, 1.03])
axs[1].grid(alpha=grid_alpha)
plt.legend(["env", "pol", "gag", "neutral expectation"], fontsize=fontsize, loc="lower right")
plt.tight_layout()
plt.show()
print(pfix["pol"]["non_syn"]["proba"])
print(pfix["pol"]["non_syn"]["freq_bin"])
```
# 1 - Predicting Salaries from Stack Overflow Surveys
Stack Overflow has been conducting [annual user surveys](https://insights.stackoverflow.com/survey/?utm_source=so-owned&utm_medium=blog&utm_campaign=dev-survey-2017&utm_content=blog-link&utm_term=data) starting in 2011. Yes, this is the same survey that (re)started the whole tabs vs spaces [debate](https://stackoverflow.blog/2017/06/15/developers-use-spaces-make-money-use-tabs/) in 2017. The results for the 2018 survey have been released, and I wanted to try **to use the 2017 results to predict salaries in the 2018 results**.
For anyone who has worked on a dataset not from Kaggle or the UCI repository, you might have experienced the 80/20 rule, where 80% of your time is spent cleaning data and 20% on modeling. Despite knowing the rule, it still surprised me how much time I spent cleaning the data, which is detailed below.
Broadly, I will be going through:
- Downcasting data
- Identifying and renaming common columns
- Pre-processing data
#### 1.1 - Importing Libraries
Importing all the standard libraries because it's just habit now. I've also set options to view up to 50 columns without truncation.
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
pd.set_option('display.max_columns', 50)
```
# 2 - Reading and Downcasting Data
Downcasting data means to optimize the datatype of each column to reduce memory usage. For 2018, the dataset was more than 500 MB, which unfortunately is reaching the upper computational limits of my computer. If you are interested in a more detailed explanation, check out my [kernel](https://www.kaggle.com/yscyang1/microsoft-malware-1-loading-a-large-data-set) for the Microsoft malware competition.
Both 2017 and 2018 had the same treatment. First, I printed the breakdown of each datatype's memory usage, including the total memory usage. Then I downcasted each column and checked to see that the downcasting occurred.
Note: I changed the column "Respondent" from int32 to float32 because when saving to feather format, an error occurs with the int32 dtype.
### 2.1 - 2017 Data
- Memory usage before downcasting: 405.03 MB
- Memory usage after downcasting: 15.56 MB
- About a 95% reduction in memory usage
```
df_2017 = pd.read_csv('2017/survey_results_public.csv')
df_2017.info(memory_usage='deep')
def get_memoryUsage(df):
    dtype_lst = list(df.dtypes.astype(str).unique())  # df.get_dtype_counts() was removed in newer pandas
for dtype in dtype_lst:
print('Total memory usage for {}: {} MB'.format(dtype, format(df.select_dtypes([dtype]).memory_usage(deep = True).sum()/1024**2,'.5f')))
print('\n' + 'Total Memory Usage: {} MB'.format(format(df.memory_usage(deep=True).sum()/1024**2, '.2f')))
get_memoryUsage(df_2017)
def downcast(df):
for col in df.select_dtypes(['int64']):
df[col] = pd.to_numeric(df[col], downcast = 'signed')
for col in df.select_dtypes(['float64']):
df[col] = pd.to_numeric(df[col], downcast = 'float')
for col in df.select_dtypes(['object']):
df[col] = df[col].astype('category')
downcast(df_2017)
get_memoryUsage(df_2017)
df_2017['Respondent'] = df_2017['Respondent'].astype('float32')
```
### 2.2 - 2018 Data
- Memory usage before downcasting: 619.4 MB
- Memory usage after downcasting: 45.08 MB
- About a 90% reduction in memory usage
```
df_2018 = pd.read_csv('2018/survey_results_public.csv', low_memory=False)
get_memoryUsage(df_2018)
downcast(df_2018)
get_memoryUsage(df_2018)
df_2018['Respondent'] = df_2018['Respondent'].astype('float32')
```
### 2.3 - A Brief Glance at the Columns
There are 154 columns in 2017 and 129 in 2018. Yet, there are only 17 columns with the same name. Surely there are more common columns between the two years?
```
pd.set_option('display.max_columns', 155)
df_2017.head(3)
df_2018.head(3)
print('Number of common columns: {} \n'.format(len(set(df_2017.columns).intersection(set(df_2018)))))
print(set(df_2017.columns).intersection(set(df_2018)))
```
### 2.4 - Identifying and Renaming Columns
From the documentation, I identified 49 columns in common between 2017 and 2018, including the 17 identified above. I isolate each column and rename them so both years have the same column names.
```
# Identifying columns
df_2017_keep = df_2017[['Respondent', 'ProgramHobby', 'Country', 'University', 'EmploymentStatus', 'FormalEducation',
'MajorUndergrad', 'CompanySize', 'YearsProgram', 'YearsCodedJob', 'DeveloperType', 'CareerSatisfaction',
'JobSatisfaction', 'KinshipDevelopers', 'CompetePeers', 'LastNewJob', 'AssessJobIndustry', 'AssessJobDept',
'AssessJobTech', 'AssessJobCompensation', 'AssessJobOffice', 'AssessJobRemote', 'AssessJobProfDevel',
'AssessJobDiversity', 'AssessJobProduct', 'AssessJobFinances', 'ResumePrompted', 'Currency',
'EducationTypes', 'SelfTaughtTypes', 'TimeAfterBootcamp', 'HaveWorkedLanguage', 'WantWorkLanguage',
'HaveWorkedFramework','WantWorkFramework', 'HaveWorkedDatabase', 'WantWorkDatabase', 'HaveWorkedPlatform',
'WantWorkPlatform', 'IDE', 'Methodology', 'VersionControl', 'CheckInCode', 'StackOverflowJobListing',
'Gender', 'HighestEducationParents', 'Race', 'SurveyLong', 'Salary']]
df_2018_keep = df_2018[['Respondent', 'Hobby', 'Country', 'Student', 'Employment', 'FormalEducation', 'UndergradMajor',
'CompanySize', 'DevType', 'YearsCoding', 'YearsCodingProf', 'JobSatisfaction', 'CareerSatisfaction',
'LastNewJob', 'AssessJob1', 'AssessJob2', 'AssessJob3', 'AssessJob4', 'AssessJob5', 'AssessJob6',
'AssessJob7', 'AssessJob8', 'AssessJob9', 'AssessJob10', 'UpdateCV', 'Currency', 'ConvertedSalary',
'EducationTypes', 'SelfTaughtTypes', 'TimeAfterBootcamp', 'AgreeDisagree1', 'AgreeDisagree2',
'LanguageWorkedWith', 'LanguageDesireNextYear', 'DatabaseWorkedWith', 'DatabaseDesireNextYear',
'PlatformWorkedWith', 'PlatformDesireNextYear', 'FrameworkWorkedWith', 'FrameworkDesireNextYear',
'IDE', 'Methodology', 'VersionControl', 'CheckInCode', 'StackOverflowJobs', 'Gender',
'EducationParents', 'RaceEthnicity', 'SurveyTooLong']]
# Renaming columns
df_2017_keep.rename(columns = {'Respondent': 'ID', 'ProgramHobby': 'Hobby', 'University': 'Student', 'EmploymentStatus': 'Employment',
'FormalEducation': 'Education', 'MajorUndergrad': 'UndergradMajor', 'YearsProgram': 'YearsCoding',
'YearsCodedJob': 'YearsCodingProf', 'DeveloperType': 'DevType', 'ResumePrompted': 'UpdateCV',
'HaveWorkedLanguage': 'LanguageWorkedWith', 'WantWorkLanguage': 'LanguageDesireNextYear',
'HaveWorkedFramework': 'FrameworkWorkedWith', 'WantWorkFramework': 'FrameworkDesireNextYear',
'HaveWorkedDatabase': 'DatabaseWorkedWith', 'WantWorkDatabase': 'DatabaseDesireNextYear',
'HaveWorkedPlatform': 'PlatformWorkedWith', 'WantWorkPlatform': 'PlatformDesireNextYear',
'StackOverflowJobListing': "StackOverflowJobs", 'HighestEducationParents': 'EducationParents'},
inplace = True)
df_2018_keep.rename(columns = {'Respondent': 'ID', 'FormalEducation': 'Education', 'AssessJob1': 'AssessJobIndustry',
'AssessJob2': 'AssessJobFinances', 'AssessJob3': 'AssessJobDept', 'AssessJob4': 'AssessJobTech',
'AssessJob5': 'AssessJobCompensation', 'AssessJob6': 'AssessJobOffice',
'AssessJob7': 'AssessJobRemote', 'AssessJob8': 'AssessJobProfDevel', 'AssessJob9': 'AssessJobDiversity',
'AssessJob10': 'AssessJobProduct', 'AgreeDisagree1': 'KinshipDevelopers', 'AgreeDisagree2': 'CompetePeers',
'RaceEthnicity': 'Race', 'SurveyTooLong': 'SurveyLong', 'ConvertedSalary': 'Salary'},
inplace = True)
```
### 2.5 - Save to Feather
At this point, I would like to save my condensed raw data so I have something to go back to before I start manipulating things.
```
import os
os.makedirs('tmp', exist_ok=True)
df_2017_keep.to_feather('tmp/df_2017_1keep')
df_2018_keep.to_feather('tmp/df_2018_1keep')
```
# 3 - Processing Each Column
This is the last, but arguably the most important part of this post.
### 3.1 - Missing Data
Some respondents didn't fill out much of the survey. For example, one person filled out the hobby section and left the rest blank. Such answers are going to be useless for analysis, so I will drop all the rows that have more than 50% of the answers blank. This results in a ~30% reduction of rows.
```
# dropna(thresh=N) keeps rows with at least N non-null values, i.e. this drops rows missing more than half of the answers
df_2017_keep.dropna(thresh=len(df_2017_keep.columns)/2, inplace=True)
df_2018_keep.dropna(thresh=len(df_2018_keep.columns)/2, inplace=True)
```
### 3.2 - Salary
Since the main goal is to predict salary, any rows without a salary or currency are removed. This removes about 66% and 35% of the rows in 2017 and 2018 respectively. I haven't found in the documentation whether the salaries are already converted to US dollars in 2017, but working with the data, it seems like they are. The 2018 documentation clearly states salaries have been converted.
Since I want to know how much of each column is missing, I've written a function, getMissingPercent(), which takes a string of the column name and returns what percent of the column is missing for each year.
```
def getMissingPercent(col):
print('{} - percent missing in 2017: {}%'.format(col, df_2017_keep[col].isnull().sum()/len(df_2017_keep)*100))
print('{} - percent missing in 2018: {}%'.format(col, df_2018_keep[col].isnull().sum()/len(df_2018_keep)*100))
getMissingPercent('Salary')
df_2017_keep = df_2017_keep[(df_2017_keep['Currency'].notnull()) & (df_2017_keep['Salary'].notnull())]
df_2018_keep = df_2018_keep[(df_2018_keep['Currency'].notnull()) & (df_2018_keep['Salary'].notnull())]
# Commented out in case need to convert 2017 currencies to USD
# currency_dict = {'British pounds sterling (£)': 1.27386, 'U.S. dollars ($)': 1, 'Euros (€)': 1.14630, 'Brazilian reais (R$)': 0.269293,
# 'Indian rupees (?)': 0.0142103, 'Polish zloty (zl)': 0.266836, 'Canadian dollars (C$)': 0.755728,
# 'Russian rubles (?)': 0.0148888, 'Swiss francs': 1.01940, 'Swedish kroner (SEK)': 0.112174,
# 'Mexican pesos (MXN$)': 0.0517878, 'Australian dollars (A$)': 0.715379, 'Japanese yen (¥)': 0.00917943,
# 'Chinese yuan renminbi (¥)': 0.146269, 'Singapore dollars (S$)': 0.736965, 'South African rands (R)': 0.0721070,
# 'Bitcoin (btc)': 4019.77}
# def convert_salary(col):
# currency = col[0]
# salary = col[1]
# return currency_dict[currency] * salary
# df_2017_keep['Salary'] = df_2017_keep[['Currency','Salary']].apply(convert_salary, axis = 1)
```
### 3.3 - Hobby
Surprisingly, everyone filled out whether they program as a hobby or not. In 2017, though, you could also answer that you contributed to open source projects, whereas in 2018 answers were constrained to yes or no. I've simplified the 2017 answers so that anything aside from "No" becomes "Yes".
```
getMissingPercent('Hobby')
df_2017_keep['Hobby'].unique()
df_2018_keep['Hobby'].unique()
def hobby(col):
for row in col:
if row != 'No':
return 'Yes'
else:
return 'No'
df_2017_keep['Hobby'] = df_2017_keep[['Hobby']].apply(hobby, axis = 1)
```
### 3.4 - Country
Respondents state what country they reside in. Again, no missing data. But respondents had to type in the country name, so watch out for typos later.
```
getMissingPercent('Country')
```
### 3.5 - Student
For both years, this asks if the respondent is currently enrolled in a college or university program. They have the same answer choices of full time, part time, no, or prefer not to say. However, in the 2018 dataset, "I prefer not to say" responses all come out as null values; this seems to be true throughout the 2018 data. Null values are filled in with "I prefer not to say".
```
getMissingPercent('Student')
df_2017_keep['Student'].unique()
df_2018_keep['Student'].unique()
df_2018_keep['Student'] = df_2018_keep.Student.cat.add_categories('I prefer not to say').fillna('I prefer not to say')
```
### 3.6 - Employment
After removing null salaries, 2017 only has two employment statuses: employed full time or part time. In the 2017 documentation, it states that salary information was only collected if respondents stated they were employed.
There was no such filter for the 2018 data, so I've filtered out anyone unemployed, which includes independent contractors/freelancers/self-employed, those unemployed but looking for work, those unemployed and not looking for work, and retired people.
```
getMissingPercent('Employment')
sorted(df_2017_keep['Employment'].unique())
df_2018_keep['Employment'].unique()
df_2018_keep = df_2018_keep[(df_2018_keep['Employment']=='Employed full-time') | (df_2018_keep['Employment']=='Employed part-time')]
```
### 3.7 - Education
Education refers to the highest level of formal education that the respondent has completed. In 2018, a category was added for an associate degree. I've added that to the 2017 categories and converted null values in 2018 to "I prefer not to answer".
```
getMissingPercent('Education')
list(df_2017_keep['Education'].unique())
list(df_2018_keep['Education'].unique())
df_2017_keep['Education'] = df_2017_keep.Education.cat.add_categories('Associate degree')
df_2018_keep['Education'] = df_2018_keep.Education.cat.add_categories('I prefer not to answer').fillna('I prefer not to answer')
```
### 3.8 - Undergraduate Major
As one would expect, this column asks what is/was the respondent's undergraduate major. The two years have the same options to choose from, covering a wide variety of majors with heavy emphasis on different types of computer science majors.
```
getMissingPercent('UndergradMajor')
list(df_2017_keep['UndergradMajor'].unique())==list(df_2018_keep['UndergradMajor'].unique())
df_2017_keep['UndergradMajor'] = df_2017_keep.UndergradMajor.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['UndergradMajor'] = df_2018_keep.UndergradMajor.cat.add_categories('NaN').fillna('NaN')
```
### 3.9 - Company Size
Company size options range from fewer than 10 employees to greater than 10k employees.
```
getMissingPercent('CompanySize')
df_2017_keep['CompanySize'] = df_2017_keep.CompanySize.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['CompanySize'] = df_2018_keep.CompanySize.cat.add_categories('NaN').fillna('NaN')
sorted(df_2017_keep['CompanySize'].unique())==sorted(df_2018_keep['CompanySize'].unique())
```
### 3.10 - Years Coded
This section asks how many years the respondent has been coding, including for school, for fun, or for work (professionally). The answer choices for 2017 were a little confusing. For example, answer choices included 1 to 2 years, 2 to 3 years, and so forth. 2018 choices were less ambiguous, with example choices of 0-2 years and 3-5 years.
For the 2017 choices, I've reworked the answer choices so that the first number is included and the second is excluded. To clarify, if the respondent chose the answer choice 1 to 2 years, it means they have been coding anywhere between 1 and 1.99 years. With this method, I am able to make the same answer choices between the two datasets.
```
getMissingPercent('YearsCoding')
df_2017_keep['YearsCoding'] = df_2017_keep.YearsCoding.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['YearsCoding'] = df_2018_keep.YearsCoding.cat.add_categories('NaN').fillna('NaN')
YearsCoding2017_dict ={'Less than a year': '0-2 years', '1 to 2 years': '0-2 years', '2 to 3 years': '3-5 years',
'3 to 4 years': '3-5 years', '4 to 5 years': '3-5 years', '5 to 6 years': '6-8 years',
'6 to 7 years': '6-8 years', '7 to 8 years': '6-8 years', '8 to 9 years': '9-11 years',
'9 to 10 years': '9-11 years', '10 to 11 years': '9-11 years', '11 to 12 years': '12-14 years',
'12 to 13 years': '12-14 years', '13 to 14 years': '12-14 years', '14 to 15 years': '15-17 years',
'15 to 16 years': '15-17 years', '16 to 17 years': '15-17 years', '17 to 18 years': '18-20 years',
'18 to 19 years': '18-20 years', '19 to 20 years': '18-20 years', '20 or more years': '20 or more years',
'NaN': 'NaN'}
def convert_YearsCoding2017(col):
return YearsCoding2017_dict[col]
df_2017_keep['YearsCoding'] = df_2017_keep['YearsCoding'].apply(convert_YearsCoding2017)
YearsCoding2018_dict = {'21-23 years': '20 or more years', '24-26 years': '20 or more years', '27-29 years': '20 or more years',
'30 or more years': '20 or more years', 'NaN': 'NaN'}
def convert_YearsCoding2018(col):
try:
return YearsCoding2018_dict[col]
except:
return col
df_2018_keep['YearsCoding'] = df_2018_keep['YearsCoding'].apply(convert_YearsCoding2018)
```
### 3.11 - Years Coded Professionally
Similar to section 3.10's Years Coded, but only applies to the years that the respondent has coded for work. I was able to reuse the years coding dictionary.
```
getMissingPercent('YearsCodingProf')
df_2017_keep['YearsCodingProf'] = df_2017_keep.YearsCodingProf.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['YearsCodingProf'] = df_2018_keep.YearsCodingProf.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['YearsCodingProf'] = df_2017_keep['YearsCodingProf'].apply(convert_YearsCoding2017)
df_2018_keep['YearsCodingProf'] = df_2018_keep['YearsCodingProf'].apply(convert_YearsCoding2018)
```
### 3.12 - Software Developer Type
This question asks the respondent what type of software developer they are. Multiple responses are allowed, which has resulted in ~900 and 4800 unique responses for 2017 and 2018 respectively. For now, I will fill in the null values as "NaN", and create a new column that indicates how many options the respondent chose, as written in get_count().
```
getMissingPercent('DevType')
df_2017_keep['DevType']= df_2017_keep.DevType.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['DevType']= df_2018_keep.DevType.cat.add_categories('NaN').fillna('NaN')
len(df_2017_keep['DevType'][36].split(";"))
def get_count(col):
count = len(col.split(';'))
if col == 'NaN':
count = 0
return count
df_2017_keep['DevType_Count'] = df_2017_keep['DevType'].apply(get_count)
df_2018_keep['DevType_Count'] = df_2018_keep['DevType'].apply(get_count)
```
### 3.13 - Career Satisfaction
This question asks respondents how satisfied they are with their career so far. Again, the two years use completely different answer systems: 2017 uses a 0 to 10 scale (where 0 is most dissatisfied and 10 is most satisfied), while 2018 answers range from extremely dissatisfied to extremely satisfied. To combine the two, I've anchored it so that a 0 corresponds to extremely dissatisfied, 5 to neither satisfied nor dissatisfied, and 10 to extremely satisfied.
```
getMissingPercent('CareerSatisfaction')
list(df_2017_keep['CareerSatisfaction'].unique())
list(df_2018_keep['CareerSatisfaction'].unique())
satisfaction_dict = {0.0: 'Extremely dissatisfied', 1.0: 'Moderately dissatisfied', 2.0: 'Moderately dissatisfied',
3.0: 'Slightly dissatisfied', 4.0: 'Slightly dissatisfied', 5.0: 'Neither satisfied nor dissatisfied',
6.0: 'Slightly satisfied', 7.0: 'Slightly satisfied', 8.0: 'Moderately satisfied', 9.0: 'Moderately satisfied',
10.0: 'Extremely satisfied', 'NaN': 'NaN'}
def convert_satisfaction(col):
return satisfaction_dict[col]
df_2017_keep['CareerSatisfaction'] = df_2017_keep['CareerSatisfaction'].astype('category')
df_2017_keep['CareerSatisfaction'] = df_2017_keep.CareerSatisfaction.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['CareerSatisfaction'] = df_2018_keep.CareerSatisfaction.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['CareerSatisfaction'] = df_2017_keep['CareerSatisfaction'].apply(convert_satisfaction)
```
### 3.14 - Job Satisfaction
This question is similar to section 3.13, but specific to the respondent's current job. Data processing is also similar to career satisfaction.
```
getMissingPercent('JobSatisfaction')
df_2017_keep['JobSatisfaction'].unique()
df_2018_keep['JobSatisfaction'].unique()
df_2017_keep['JobSatisfaction'] = df_2017_keep['JobSatisfaction'].astype('category')
df_2017_keep['JobSatisfaction'] = df_2017_keep.JobSatisfaction.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['JobSatisfaction'] = df_2018_keep.JobSatisfaction.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['JobSatisfaction'] = df_2017_keep['JobSatisfaction'].apply(convert_satisfaction)
```
### 3.15 - Kinship with Developers
Kinship refers to the sense of connection the respondent feels with other developers. The two years have the same 5 point scale ranging from strongly disagree to strongly agree. A minor difference is that 2017's "somewhat agree" is treated as equivalent to 2018's "neither agree nor disagree". I've decided to change this to a numerical scale where strongly disagree is a 1 and strongly agree is a 5.
```
getMissingPercent('KinshipDevelopers')
list(df_2017_keep['KinshipDevelopers'].unique())
list(df_2018_keep['KinshipDevelopers'].unique())
agree_dict = {'Strongly disagree': 1, 'Disagree': 2, 'Somewhat agree': 3, 'Neither Agree nor Disagree': 3, 'Agree': 4,
'Strongly agree': 5}
def convert_agreement(col):
return agree_dict[col]
df_2017_keep['KinshipDevelopers'] = df_2017_keep['KinshipDevelopers'].apply(convert_agreement)
df_2018_keep['KinshipDevelopers'] = df_2018_keep['KinshipDevelopers'].apply(convert_agreement)
df_2017_keep['KinshipDevelopers'] = pd.to_numeric(df_2017_keep['KinshipDevelopers'], downcast='unsigned')
df_2018_keep['KinshipDevelopers'] = pd.to_numeric(df_2018_keep['KinshipDevelopers'], downcast='unsigned')
df_2017_keep['KinshipDevelopers'] = df_2017_keep['KinshipDevelopers'].fillna(0)
df_2018_keep['KinshipDevelopers'] = df_2018_keep['KinshipDevelopers'].fillna(0)
```
### 3.16 - Compete with Peers
For the philosophers: would this be the opposite of, or similar to, the sense of kinship with other developers (section 3.15)? As the title suggests, this question asks whether the respondent sees themselves as being in competition with their peers. It uses the same 5 point scale as section 3.15.
```
getMissingPercent('CompetePeers')
df_2017_keep['CompetePeers'].unique()
df_2018_keep['CompetePeers'].unique()
df_2017_keep['CompetePeers'] = df_2017_keep['CompetePeers'].apply(convert_agreement)
df_2018_keep['CompetePeers'] = df_2018_keep['CompetePeers'].apply(convert_agreement)
df_2017_keep['CompetePeers'] = pd.to_numeric(df_2017_keep['CompetePeers'], downcast='unsigned')
df_2018_keep['CompetePeers'] = pd.to_numeric(df_2018_keep['CompetePeers'], downcast='unsigned')
df_2017_keep['CompetePeers'] = df_2017_keep['CompetePeers'].fillna(0)
df_2018_keep['CompetePeers'] = df_2018_keep['CompetePeers'].fillna(0)
```
### 3.17 - Last New Job
This section asks when was the last time the respondent took a job with a new employer. Responses range from never to more than four years ago. I changed 2018's 'I've never had a job' to 'Not applicable/ never' to match 2017's response.
```
getMissingPercent('LastNewJob')
list(df_2017_keep['LastNewJob'].unique())
list(df_2018_keep['LastNewJob'].unique())
df_2017_keep['LastNewJob'] = df_2017_keep.LastNewJob.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['LastNewJob'] = df_2018_keep.LastNewJob.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['LastNewJob'] = df_2018_keep['LastNewJob'].replace("I've never had a job", 'Not applicable/ never')
```
### 3.18 - Assessing Jobs: Industry
The subsequent assessing-jobs sections ask how important each category is to the respondent when assessing a potential job to apply to. In this section, the category is the industry.
For all of the assessing jobs columns, 2017 potential responses range from not at all important to very important, whereas 2018's responses range from 1 to 10. I've anchored it so that a 1 corresponds to not at all important, 5 is somewhat important, and 10 corresponds to very important.
```
getMissingPercent('AssessJobIndustry')
list(df_2017_keep['AssessJobIndustry'].unique())
list(df_2018_keep['AssessJobIndustry'].unique())
df_2018_keep['AssessJobIndustry'] = df_2018_keep['AssessJobIndustry'].astype('category')
df_2018_keep['AssessJobIndustry'] = df_2018_keep.AssessJobIndustry.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['AssessJobIndustry'] = df_2017_keep.AssessJobIndustry.cat.add_categories('NaN').fillna('NaN')
importance_dict = {1: 'Not at all important', 2: 'Not at all important', 3: 'Not very important', 4: 'Not very important',
5: 'Somewhat important', 6: 'Somewhat important', 7: 'Important', 8: 'Important', 9: 'Very important',
10: 'Very important', 'NaN': 'NaN'}
def convert_importance(col):
return importance_dict[col]
df_2018_keep['AssessJobIndustry'] = df_2018_keep['AssessJobIndustry'].apply(convert_importance)
```
### 3.19 - Assessing Jobs: Department
How important is the specific team or department when assessing potential jobs?
```
getMissingPercent('AssessJobDept')
list(df_2017_keep['AssessJobDept'].unique())
df_2018_keep['AssessJobDept'].unique()
df_2018_keep['AssessJobDept'] = df_2018_keep['AssessJobDept'].astype('category')
df_2017_keep['AssessJobDept'] = df_2017_keep.AssessJobDept.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobDept'] = df_2018_keep.AssessJobDept.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobDept'] = df_2018_keep['AssessJobDept'].apply(convert_importance)
```
### 3.20 - Assessing Jobs: Technology
How important is the language, frameworks, and/or other technologies when assessing a potential job?
```
getMissingPercent('AssessJobTech')
df_2018_keep['AssessJobTech'] = df_2018_keep['AssessJobTech'].astype('category')
df_2017_keep['AssessJobTech'] = df_2017_keep.AssessJobTech.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobTech'] = df_2018_keep.AssessJobTech.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobTech'] = df_2018_keep['AssessJobTech'].apply(convert_importance)
```
### 3.21 - Assessing Jobs: Compensation
How important are the benefits and compensation when assessing a potential job?
```
getMissingPercent('AssessJobCompensation')
df_2018_keep['AssessJobCompensation'] = df_2018_keep['AssessJobCompensation'].astype('category')
df_2017_keep['AssessJobCompensation'] = df_2017_keep.AssessJobCompensation.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobCompensation'] = df_2018_keep.AssessJobCompensation.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobCompensation'] = df_2018_keep['AssessJobCompensation'].apply(convert_importance)
```
### 3.22 - Assessing Jobs: Office
How important is the office environment/company culture when assessing a potential job?
```
getMissingPercent('AssessJobOffice')
df_2018_keep['AssessJobOffice'] = df_2018_keep['AssessJobOffice'].astype('category')
df_2017_keep['AssessJobOffice'] = df_2017_keep.AssessJobOffice.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobOffice'] = df_2018_keep.AssessJobOffice.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobOffice'] = df_2018_keep['AssessJobOffice'].apply(convert_importance)
```
### 3.23 - Assessing Jobs: Work Remotely
How important is working from home/remotely when assessing a potential job?
```
getMissingPercent("AssessJobRemote")
df_2018_keep['AssessJobRemote'] = df_2018_keep['AssessJobRemote'].astype('category')
df_2017_keep['AssessJobRemote'] = df_2017_keep.AssessJobRemote.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobRemote'] = df_2018_keep.AssessJobRemote.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobRemote'] = df_2018_keep['AssessJobRemote'].apply(convert_importance)
```
### 3.24 - Assessing Jobs: Professional Development
How important are opportunities for professional development when assessing a potential job?
```
getMissingPercent('AssessJobProfDevel')
df_2018_keep['AssessJobProfDevel'] = df_2018_keep['AssessJobProfDevel'].astype('category')
df_2017_keep['AssessJobProfDevel'] = df_2017_keep.AssessJobProfDevel.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobProfDevel'] = df_2018_keep.AssessJobProfDevel.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobProfDevel'] = df_2018_keep['AssessJobProfDevel'].apply(convert_importance)
```
### 3.25 - Assessing Jobs: Diversity
How important is the diversity of the company or organization when assessing a potential job?
```
getMissingPercent('AssessJobDiversity')
df_2018_keep['AssessJobDiversity'] = df_2018_keep['AssessJobDiversity'].astype('category')
df_2017_keep['AssessJobDiversity'] = df_2017_keep.AssessJobDiversity.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobDiversity'] = df_2018_keep.AssessJobDiversity.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobDiversity'] = df_2018_keep['AssessJobDiversity'].apply(convert_importance)
```
### 3.26 - Assessing Jobs: Product Impact
How important is the impactfulness of the product or service the respondent would be working on when assessing a potential job?
```
getMissingPercent('AssessJobProduct')
df_2018_keep['AssessJobProduct'] = df_2018_keep['AssessJobProduct'].astype('category')
df_2017_keep['AssessJobProduct'] = df_2017_keep.AssessJobProduct.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobProduct'] = df_2018_keep.AssessJobProduct.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobProduct'] = df_2018_keep['AssessJobProduct'].apply(convert_importance)
```
### 3.27 - Assessing Jobs: Finances
How important is the financial performance or funding status of the company when assessing a potential job?
```
getMissingPercent('AssessJobFinances')
df_2018_keep['AssessJobFinances'] = df_2018_keep['AssessJobFinances'].astype('category')
df_2017_keep['AssessJobFinances'] = df_2017_keep.AssessJobFinances.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobFinances'] = df_2018_keep.AssessJobFinances.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobFinances'] = df_2018_keep['AssessJobFinances'].apply(convert_importance)
```
### 3.28 - Reason for Updated CV
What was the reason the respondent last updated their resume/CV? They could only pick one response, but between the two years, the responses were vastly different. I added categories as appropriate to each year.
```
getMissingPercent('UpdateCV')
list(df_2017_keep['UpdateCV'].unique())
list(df_2018_keep['UpdateCV'].unique())
df_2017_keep['UpdateCV'] = df_2017_keep.UpdateCV.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['UpdateCV'] = df_2018_keep.UpdateCV.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['UpdateCV'] = df_2017_keep.UpdateCV.cat.add_categories(['My job status or other personal status changed',
'I did not receive an expected change in compensation',
'I had a negative experience or interaction at work',
'I received bad news about the future of my company or department'])
df_2018_keep['UpdateCV'] = df_2018_keep.UpdateCV.cat.add_categories(['I completed a major project, assignment, or contract',
'Something else',
'I was just giving it a regular update'])
```
### 3.29 - Informal Schooling Education Types
The respondent is asked what types of activities they have participated in outside of their formal schooling. Multiple answers are allowed, resulting in ~425 unique responses for each year. Answers include anything from taking an online programming course, participating in coding competitions or hackathons, and contributing to open source software.
```
getMissingPercent('EducationTypes')
df_2017_keep['EducationTypes'].unique()
df_2017_keep['EducationTypes']= df_2017_keep.EducationTypes.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['EducationTypes']= df_2018_keep.EducationTypes.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['EducationTypes_Count'] = df_2017_keep['EducationTypes'].apply(get_count)
df_2018_keep['EducationTypes_Count'] = df_2018_keep['EducationTypes'].apply(get_count)
```
### 3.30 - Resources for the Self Taught
Respondents who indicated they taught themselves a programming technology without taking a course are asked what resources they went to. Sources include books, Stack Overflow, and official documentation.
```
getMissingPercent('SelfTaughtTypes')
df_2017_keep['SelfTaughtTypes']= df_2017_keep.SelfTaughtTypes.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['SelfTaughtTypes']= df_2018_keep.SelfTaughtTypes.cat.add_categories('NaN').fillna('NaN')
```
### 3.31 - Time After Bootcamp to get Hired
For respondents who indicated they went to a bootcamp, this question asks how long did it take for each person to get hired after the camp. Both years have essentially the same options, but the wording is slightly different.
Also note that there is an extremely high number of missing data for both years.
```
getMissingPercent('TimeAfterBootcamp')
list(df_2017_keep['TimeAfterBootcamp'].unique())
list(df_2018_keep['TimeAfterBootcamp'].unique())
df_2017_keep['TimeAfterBootcamp']= df_2017_keep.TimeAfterBootcamp.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['TimeAfterBootcamp']= df_2018_keep.TimeAfterBootcamp.cat.add_categories('NaN').fillna('NaN')
def convert_TimeAfterBootcamp(col):
if col == 'I already had a full-time job as a developer when I began the program':
return 'I already had a job as a developer when I started the program'
elif col == 'Immediately after graduating':
return 'Immediately upon graduating'
elif col == 'I haven’t gotten a developer job':
return "I haven't gotten a job as a developer yet"
else:
return col
df_2018_keep['TimeAfterBootcamp'] = df_2018_keep['TimeAfterBootcamp'].apply(convert_TimeAfterBootcamp)
df_2018_keep['TimeAfterBootcamp'] = df_2018_keep.TimeAfterBootcamp.cat.add_categories('I got a job as a developer before completing the program')
```
### 3.32 - Languages Worked With
What programming languages has the respondent worked with extensively in the past year? Multiple languages are allowed, which gives many unique variables.
```
getMissingPercent('LanguageWorkedWith')
df_2017_keep['LanguageWorkedWith']= df_2017_keep.LanguageWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['LanguageWorkedWith']= df_2018_keep.LanguageWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['LanguageWorkedWith_Count'] = df_2017_keep['LanguageWorkedWith'].apply(get_count)
df_2018_keep['LanguageWorkedWith_Count'] = df_2018_keep['LanguageWorkedWith'].apply(get_count)
```
### 3.33 - Languages Want to Work With
Similar to section 3.32, but with languages the respondent would like to learn in the next year.
```
getMissingPercent('LanguageDesireNextYear')
df_2017_keep['LanguageDesireNextYear']= df_2017_keep.LanguageDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['LanguageDesireNextYear']= df_2018_keep.LanguageDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['LanguageDesireNextYear_Count'] = df_2017_keep['LanguageDesireNextYear'].apply(get_count)
df_2018_keep['LanguageDesireNextYear_Count'] = df_2018_keep['LanguageDesireNextYear'].apply(get_count)
```
### 3.34 - Frameworks Worked With
Similar to section 3.32, but with frameworks (ex: Django, TensorFlow, Angular, etc)
```
getMissingPercent('FrameworkWorkedWith')
df_2017_keep['FrameworkWorkedWith']= df_2017_keep.FrameworkWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['FrameworkWorkedWith']= df_2018_keep.FrameworkWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['FrameworkWorkedWith_Count'] = df_2017_keep['FrameworkWorkedWith'].apply(get_count)
df_2018_keep['FrameworkWorkedWith_Count'] = df_2018_keep['FrameworkWorkedWith'].apply(get_count)
```
### 3.35 - Frameworks Want to Work With
Similar to section 3.33 and 3.34, but with frameworks the respondent would like to learn next year.
```
getMissingPercent('FrameworkDesireNextYear')
df_2017_keep['FrameworkDesireNextYear']= df_2017_keep.FrameworkDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['FrameworkDesireNextYear']= df_2018_keep.FrameworkDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['FrameworkDesireNextYear_Count'] = df_2017_keep['FrameworkDesireNextYear'].apply(get_count)
df_2018_keep['FrameworkDesireNextYear_Count'] = df_2018_keep['FrameworkDesireNextYear'].apply(get_count)
```
### 3.36 - Databases Worked With
Similar to section 3.32, but with databases (ex: Microsoft Azure, MySQL, MongoDB, etc.)
```
getMissingPercent('DatabaseWorkedWith')
df_2017_keep['DatabaseWorkedWith']= df_2017_keep.DatabaseWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['DatabaseWorkedWith']= df_2018_keep.DatabaseWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['DatabaseWorkedWith_Count'] = df_2017_keep['DatabaseWorkedWith'].apply(get_count)
df_2018_keep['DatabaseWorkedWith_Count'] = df_2018_keep['DatabaseWorkedWith'].apply(get_count)
```
### 3.37 - Databases Want to Work With
Similar to section 3.33, but with databases.
```
getMissingPercent('DatabaseDesireNextYear')
df_2017_keep['DatabaseDesireNextYear']= df_2017_keep.DatabaseDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['DatabaseDesireNextYear']= df_2018_keep.DatabaseDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['DatabaseDesireNextYear_Count'] = df_2017_keep['DatabaseDesireNextYear'].apply(get_count)
df_2018_keep['DatabaseDesireNextYear_Count'] = df_2018_keep['DatabaseDesireNextYear'].apply(get_count)
```
### 3.38 - Platforms Worked With
Similar to section 3.32 but with platforms (ex: Linux, Microsoft Azure, AWS, etc.)
```
getMissingPercent('PlatformWorkedWith')
df_2017_keep['PlatformWorkedWith']= df_2017_keep.PlatformWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['PlatformWorkedWith']= df_2018_keep.PlatformWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['PlatformWorkedWith_Count'] = df_2017_keep['PlatformWorkedWith'].apply(get_count)
df_2018_keep['PlatformWorkedWith_Count'] = df_2018_keep['PlatformWorkedWith'].apply(get_count)
```
### 3.39 - Platforms Want to Work With
Similar to section 3.33, but with platforms.
```
getMissingPercent('PlatformDesireNextYear')
df_2017_keep['PlatformDesireNextYear']= df_2017_keep.PlatformDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['PlatformDesireNextYear']= df_2018_keep.PlatformDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['PlatformDesireNextYear_Count'] = df_2017_keep['PlatformDesireNextYear'].apply(get_count)
df_2018_keep['PlatformDesireNextYear_Count'] = df_2018_keep['PlatformDesireNextYear'].apply(get_count)
```
### 3.40 - IDE
What development environment does the respondent use on a regular basis? Examples include Sublime, RStudio, PyCharm, etc. Multiple answers allowed.
```
getMissingPercent('IDE')
df_2017_keep['IDE']= df_2017_keep.IDE.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['IDE']= df_2018_keep.IDE.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['IDE_Count'] = df_2017_keep['IDE'].apply(get_count)
df_2018_keep['IDE_Count'] = df_2018_keep['IDE'].apply(get_count)
```
### 3.41 - Methodology
Asks the respondent what types of methodology they are familiar with. Examples include pair programming, lean, and scrum. Multiple answers allowed.
```
getMissingPercent('Methodology')
df_2017_keep['Methodology']= df_2017_keep.Methodology.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['Methodology']= df_2018_keep.Methodology.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['Methodology_Count'] = df_2017_keep['Methodology'].apply(get_count)
df_2018_keep['Methodology_Count'] = df_2018_keep['Methodology'].apply(get_count)
```
### 3.42 - Version Control
Asks the respondent what version control (if at all) they use. Multiple answers allowed.
```
getMissingPercent('VersionControl')
df_2017_keep['VersionControl']= df_2017_keep.VersionControl.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['VersionControl']= df_2018_keep.VersionControl.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['VersionControl_Count'] = df_2017_keep['VersionControl'].apply(get_count)
df_2018_keep['VersionControl_Count'] = df_2018_keep['VersionControl'].apply(get_count)
```
### 3.43 - Frequency of Checking in Code
Asks the respondent over the past year, how often they checked in or committed code. Answers are similarly worded for the two years.
```
getMissingPercent('CheckInCode')
list(df_2017_keep['CheckInCode'].unique())
list(df_2018_keep['CheckInCode'].unique())
df_2017_keep['CheckInCode']= df_2017_keep.CheckInCode.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['CheckInCode']= df_2018_keep.CheckInCode.cat.add_categories('NaN').fillna('NaN')
checkInCode_dict = { 'Just a few times over the year': 'Less than once per month', 'A few times a month': 'Weekly or a few times per month',
'Multiple times a day': 'Multiple times per day', 'Once a day': 'Once a day', 'A few times a week': 'A few times per week',
'Never': 'Never', 'NaN': 'NaN'}
def convert_checkInCode(col):
return checkInCode_dict[col]
df_2017_keep['CheckInCode'] = df_2017_keep['CheckInCode'].apply(convert_checkInCode)
```
### 3.44 - Stack Overflow Jobs
Respondents are asked if they have ever used or visited the Stack Overflow Jobs webpage. It was difficult to combine responses between the two years, so I simplified answers to yes or no.
```
getMissingPercent('StackOverflowJobs')
list(df_2017_keep['StackOverflowJobs'].unique())
list(df_2018_keep['StackOverflowJobs'].unique())
df_2017_keep['StackOverflowJobs']= df_2017_keep.StackOverflowJobs.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['StackOverflowJobs']= df_2018_keep.StackOverflowJobs.cat.add_categories('NaN').fillna('NaN')
SOJobs_dict = {'Yes': 'Yes', 'No, I knew that Stack Overflow had a jobs board but have never used or visited it': 'No',
"No, I didn't know that Stack Overflow had a jobs board": 'No', "Haven't done at all": 'No', 'Several times': 'Yes',
'Once or twice': 'Yes', 'At least once each week': 'Yes', 'At least once each day': 'Yes', 'NaN': 'NaN'}
def convert_SOJobs(col):
return SOJobs_dict[col]
df_2017_keep['StackOverflowJobs'] = df_2017_keep['StackOverflowJobs'].apply(convert_SOJobs)
df_2018_keep['StackOverflowJobs'] = df_2018_keep['StackOverflowJobs'].apply(convert_SOJobs)
```
### 3.45 - Gender
Respondents are asked what gender(s) they identify with. Multiple answers allowed, so there are more unique answers than just your typical male/female binary.
```
getMissingPercent('Gender')
df_2017_keep['Gender'].unique()
df_2018_keep['Gender'].unique()
df_2017_keep['Gender']= df_2017_keep.Gender.cat.add_categories('I prefer not to answer').fillna('I prefer not to answer')
df_2018_keep['Gender']= df_2018_keep.Gender.cat.add_categories('I prefer not to answer').fillna('I prefer not to answer')
df_2017_keep['Gender_Count'] = df_2017_keep['Gender'].apply(get_count)
df_2018_keep['Gender_Count'] = df_2018_keep['Gender'].apply(get_count)
```
### 3.46 - Parents' Highest Education
Asks for the highest level of education of the respondent's parents. Both years had similar answers but different wording.
```
getMissingPercent('EducationParents')
list(df_2017_keep['EducationParents'].unique())
list(df_2018_keep['EducationParents'].unique())
df_2017_keep['EducationParents']= df_2017_keep.EducationParents.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['EducationParents']= df_2018_keep.EducationParents.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['EducationParents']= df_2017_keep.EducationParents.cat.add_categories('Associate degree')
df_2018_keep['EducationParents']= df_2018_keep.EducationParents.cat.add_categories(['I don\'t know/not sure', 'I prefer not to answer'])
educationParents_dict = {'NaN': 'NaN', 'A professional degree': 'A professional degree',
'Professional degree (JD, MD, etc.)':'A professional degree',
"A bachelor's degree": "A bachelor's degree",
'Bachelor’s degree (BA, BS, B.Eng., etc.)': "A bachelor's degree",
"A master's degree": "A master's degree",
'Master’s degree (MA, MS, M.Eng., MBA, etc.)': "A master's degree",
'High school': 'High school',
'Secondary school (e.g. American high school, German Realschule or Gymnasium, etc.)': 'High school',
'A doctoral degree': 'A doctoral degree',
'Other doctoral degree (Ph.D, Ed.D., etc.)': 'A doctoral degree',
'Some college/university study, no bachelor\'s degree': 'Some college/university',
'Some college/university study without earning a degree': 'Some college/university',
'Primary/elementary school': 'Primary/elementary school',
'No education': 'No education', 'They never completed any formal education': 'No education',
'I prefer not to answer': 'I prefer not to answer', "I don't know/not sure": "I don't know/not sure",
'Associate degree': 'Associate degree'}
def convert_educationParents(col):
return educationParents_dict[col]
df_2017_keep['EducationParents'] = df_2017_keep['EducationParents'].apply(convert_educationParents)
df_2018_keep['EducationParents'] = df_2018_keep['EducationParents'].apply(convert_educationParents)
```
### 3.47 - Race
Respondents are asked what race(s) they identify with. Multiple answers allowed.
```
getMissingPercent('Race')
df_2017_keep['Race']= df_2017_keep.Race.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['Race']= df_2018_keep.Race.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['Race_Count'] = df_2017_keep['Race'].apply(get_count)
df_2018_keep['Race_Count'] = df_2018_keep['Race'].apply(get_count)
```
### 3.48 - Survey too Long
Lastly, respondents are asked if the survey was too long.
```
getMissingPercent('SurveyLong')
list(df_2017_keep['SurveyLong'].unique())
list(df_2018_keep['SurveyLong'].unique())
surveyLength_dict = {'Strongly disagree': 'The survey was too short', 'Disagree': 'The survey was too short',
'Somewhat agree': 'The survey was an appropriate length', 'Agree': 'The survey was too long',
'Strongly agree': 'The survey was too long', 'NaN': 'NaN'}
def convert_survey(col):
return surveyLength_dict[col]
df_2017_keep['SurveyLong']= df_2017_keep.SurveyLong.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['SurveyLong']= df_2018_keep.SurveyLong.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['SurveyLong'] = df_2017_keep['SurveyLong'].apply(convert_survey)
```
# 4 - Saving the Data
Before saving the data, let's check that the data type of each column is what we want.
### 4.1 - Checking Datatypes
```
df_2017_keep.info()
df_2018_keep.info()
```
I see lots of object categories, and I'm fairly certain the 'Count' columns don't need to be 64 bit. Let's downcast both dataframes again.
```
downcast(df_2017_keep)
downcast(df_2018_keep)
get_memoryUsage(df_2017_keep)
get_memoryUsage(df_2018_keep)
```
As expected, objects were converted to categories, and the 'count' columns were converted to int8.
### 4.2 - Saving to Feather
Finally, we can save the updated columns to feather format and it should be able to run through a random forest model without a problem. Since I deleted some rows, the index is not in sequential order. The index must be reset before saving to feather.
```
df_2017_keep.reset_index(inplace = True)
df_2018_keep.reset_index(inplace = True)
df_2017_keep.to_feather('tmp/df_2017_2keep')
df_2018_keep.to_feather('tmp/df_2018_2keep')
df_2017_keep.columns
```
### test data
```
df_2017_keep['DevType'][:5]
test = pd.DataFrame({'A':['adlkfslkfd', 'Nan', 'NaN', 'joke;asdlfk;asdf', 'adsf;dsf;asdf;dsa;fds;;fd;faf;ds'],
'B': [np.nan, 'No', 'Yes, fdas', 'Yes', 'No'], 'C':[45, 65,23,45,74]})
test
test['A'].apply(get_count)
def test_func(col):
return len(col.split(';'))
# col_suffix = '_count'
# for row in df[col]:
# df[col + col_suffix] = row.split(';')
test['A_Count'] = test['A'].apply(test_func)
test
len('web;asdf'.split(';'))
df_2018[df_2018['Respondent']==21]
```
```
import pandas as pd
import numpy as np
from scipy import stats
from statsmodels.stats.weightstats import ztest
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import display, Markdown
df = pd.read_csv("../data/raw/train.csv")
```
### Dataset inspection
- Should we worry about computational complexity? (No, small dataset and small number of features)
- Should we use sampling techniques to reduce the size of the dataset? (No)
```
def display_df_memory_usage(df):
"""
Display the memory usage of a dataframe.
"""
md_table_str = '|Column Name|Size (MB)|\n|---|---|\n'
mem_mb_total = 0
for col_name, mem_bytes in df.memory_usage(deep=True).items():
mem_mb = mem_bytes / 1024**2
mem_mb_total += mem_mb
md_table_str += '|{}|{:.2f}|\n'.format(col_name, mem_mb)
md_table_str += '|Total|{:.2f}|\n'.format(mem_mb_total)
display(Markdown(md_table_str))
display_df_memory_usage(df)
```
### Conclusion:
- We're working with a small dataset. Thus we can use all the data without worrying about computational resources or sampling the data.
## Data Quality Checks
- Are there too many missing values? (Just in some columns)
- Are there any columns with many values missing? (Yes, cabin)
- Should we drop any columns? (Maybe, cabin)
- Are there duplicate values? (No)
- Are there any strange behaviors or correlations in the data? (No, it seems to be OK, but we should investigate with more sophisticated methods)
- At first glance, we might think that the embarkation port affects the survival rate, but the initial analysis suggests that may not be the case.
- Survival rate seems correlated with Pclass
- Should we stop the analysis? (No, we should continue)
```
df.info()
# create a series with the percentage of missing values for each column
missing_values = df.isnull().sum() / len(df)*100
missing_values = missing_values.sort_values(ascending=False)
missing_values.rename("% missing values", inplace=True)
display(Markdown('**Missing values**'))
display(Markdown(missing_values.to_markdown()))
del missing_values
# print a markdown table with the col , the number of unique values and the unique values list
def unique_values_table(df):
"""Print a markdown table
with the col, the number of unique values and the unique values
list if there are more than 4 unique values.
"""
md_table_str = '|Column Name|Unique Values||\n|---|---|---|\n'
for col_name, unique_values in df.nunique().items():
if unique_values > 3:
md_table_str += '|{}|{}|\n'.format(col_name, unique_values)
else:
md_unique_str = ' '.join([
f'{name}: {value*100:.1f}\%'
for name, value in
df[col_name].value_counts(normalize=True).items()
])
md_table_str += '|{}|{}|{}\n'.format(
col_name, unique_values, md_unique_str)
display(Markdown(md_table_str))
unique_values_table(df)
# drop PassengerId column
df.drop(columns=['PassengerId'], inplace=True)
df.describe()
# check for duplicate rows
display(Markdown('**Duplicate rows**'))
display(Markdown(f'{df.duplicated().sum()} duplicate rows'))
df.hist('Age', bins=100)
plt.show()
```
- The `Age` feature distribution seems to be skewed. We should take this into account if we perform any kind of missing-value imputation (one option is sketched below).
- The values are between 0 and 80 which seems to be a reasonable range.
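A minimal sketch of a skew-aware option, assuming `df` is the training dataframe loaded above (median imputation per `Pclass`; this is an illustration, not the final preprocessing choice):
```
# Sketch: impute missing Age with the median age of the passenger's Pclass.
df_imputed = df.copy()
df_imputed['Age'] = df_imputed.groupby('Pclass')['Age'].transform(
    lambda s: s.fillna(s.median())
)
df_imputed['Age'].isnull().sum()  # expect 0 missing values afterwards
```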
```
fig, axes = plt.subplots(nrows=1, ncols=3)
for a, col in zip(axes, ['Pclass', 'Sex', 'Embarked']):
sns.countplot(x=col ,hue='Survived',data=df, ax=a)
plt.show()
```
- The `Pclass` seems to affect the survival rate, which is reasonable.
- The discrepancy between female/male rates may be related to the code of conduct
"*Women and children first*". However, we must investigate this further, because the discrepancy could be caused by other factors.
- At first glance it seems that passengers who embarked at the `S` point are more likely to die. Obviously, it is unrealistic that the port where a passenger chose to embark affects their chance of survival.
- Almost $72\%$ of the passengers embarked at the S point.
```
fig, axes = plt.subplots(nrows=1, ncols=2)
for a, col in zip(axes, ['Pclass', 'Survived']):
sns.countplot(x=col ,hue='Sex',data=df, ax=a)
plt.show()
```
- We can notice that the third class is composed mostly of male passengers. So perhaps the discrepancy in survival rates between male and female passengers could also be related to this. We must investigate this more carefully.
```
fig, axes = plt.subplots(nrows=1, ncols=2)
for a, col in zip(axes, ['Pclass', 'Sex']):
sns.countplot(x=col ,hue='Embarked',data=df, ax=a)
plt.show()
def show_dist_table(
    df, col_a='Embarked', col_b='Pclass',
    col_by='Pclass', how='count'
):
    """Count of `col_b` within each `col_a`, with the percentage of each
    group relative to its `col_a` total."""
    sce = df[[col_a, col_b]].groupby([col_a, col_b]).agg({col_by: how})
    sce['Percentage'] = sce.groupby(level=0).apply(
        lambda x: 100 * x / float(x.sum()))
    sce['Percentage'] = sce['Percentage'].map(
        lambda x: f'{x:.1f}%')
    return sce
show_dist_table(df)
```
- We can notice that most of the passengers that embarked at point `S` came from the third class.
- Point `Q` also has a high share of third-class passengers, but with a difference: contrary to point `S`, the number of passengers that embarked at point `Q` is much lower (see the check below).
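As an extra check (not part of the original checklist), we can condition the survival rate on `Pclass` to see whether the apparent effect of the embarkation point is really just a class effect:
```
# Survival rate (%) by embarkation point, conditioned on passenger class.
# If rates are similar within each class, the Embarked effect is likely a Pclass artefact.
survival_by_port_class = (
    df.groupby(['Pclass', 'Embarked'])['Survived']
      .mean()
      .mul(100)
      .round(1)
      .unstack()
)
display(Markdown(survival_by_port_class.to_markdown()))
```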
## More EDA and statistics
Let's take a look at the `Age` feature distribution.
```
# plot histogram of age by Pclass
plt.figure()
for col in [1, 2, 3]:
df_age = df[df['Pclass'] == col]['Age']
sns.distplot(df_age, label=f'Pclass {col}')
plt.legend()
plt.show()
(df[df['Pclass'] == 1]['Age'].describe(), df[df['Pclass'] == 2]['Age'].describe())
```
- The first-class passengers are older than those in the second and third classes. We know that first-class passengers have a higher chance of survival than the second and third classes.
```
# ztest lives in statsmodels; import it here in case it was not imported earlier
from statsmodels.stats.weightstats import ztest

def z_test(df, col='Age'):
    """Two-sample z-test comparing `col` between survivors and non-survivors."""
    df_survivors = df[df['Survived'] == 1][col].dropna()
    df_nonsurvivors = df[df['Survived'] == 0][col].dropna()
    z_stat, p_value = ztest(df_survivors, df_nonsurvivors)
    print("Z Test")
    print(20*'-')
    print(f"Z stat. = {z_stat:.3f}")
    print(f"P value = {p_value:.3f}\n")
    print(20*'=')
z_test(df)
sns.histplot(df[df['Survived'] == 0]['Age'], kde=True)
```
## EDA through SHAP
## Torch Core
This module contains all the basic functions we need in other modules of the fastai library (split with [`core`](/core.html#core) that contains the ones not requiring pytorch). Its documentation can easily be skipped at a first read, unless you want to know what a given function does.
```
from fastai.imports import *
from fastai.gen_doc.nbdoc import *
from fastai.layers import *
from fastai.torch_core import *
```
## Global constants
`AdamW = partial(optim.Adam, betas=(0.9,0.99))` <div style="text-align: right"><a href="https://github.com/fastai/fastai/blob/master/fastai/torch_core.py#L43">[source]</a></div>
`bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)` <div style="text-align: right"><a href="https://github.com/fastai/fastai/blob/master/fastai/torch_core.py#L41">[source]</a></div>
`defaults.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')` <div style="text-align: right"><a href="https://github.com/fastai/fastai/blob/master/fastai/torch_core.py#L62">[source]</a></div>
If you are trying to make fastai run on the CPU, simply change the default device: `defaults.device = 'cpu'`.
Alternatively, if not using wildcard imports: `fastai.torch_core.defaults.device = 'cpu'`.
## Functions that operate conversions
```
show_doc(batch_to_half)
show_doc(flatten_model, full_name='flatten_model')
```
Flattens all the layers of `m` into an array. This allows for easy access to the layers of the model and allows you to manipulate the model as if it was an array.
```
m = simple_cnn([3,6,12])
m
flatten_model(m)
show_doc(model2half)
```
Converting model parameters to half precision allows us to leverage fast `FP16` arithmetic which can speed up the computations by 2-8 times. It also reduces memory consumption allowing us to train deeper models.
**Note**: Batchnorm layers are not converted to half precision as that may lead to instability in training.
```
m = simple_cnn([3,6,12], bn=True)
def show_params_dtype(state_dict):
"""Simple function to pretty print the dtype of the model params"""
for wt_name, param in state_dict.items():
print("{:<30}: {}".format(wt_name, str(param.dtype)))
print()
print("dtypes of model parameters before model2half: ")
show_params_dtype(m.state_dict())
# Converting model to half precision
m_half = model2half(m)
print("dtypes of model parameters after model2half: ")
show_params_dtype(m_half.state_dict())
show_doc(np2model_tensor)
```
It is a wrapper on top of Pytorch's `torch.as_tensor` which converts a numpy array to a torch tensor, and additionally attempts to map all floats to `torch.float32` and all integers to `torch.int64` for consistency in model data. Below is an example demonstrating its functionality for floating-point numbers; similar functionality applies to integers as well.
```
a1 = np.ones((2, 3)).astype(np.float16)
a2 = np.ones((2, 3)).astype(np.float32)
a3 = np.ones((2, 3)).astype(np.float64)
b1 = np2model_tensor(a1) # Maps to torch.float32
b2 = np2model_tensor(a2) # Maps to torch.float32
b3 = np2model_tensor(a3) # Maps to torch.float32
print(f"Datatype of as': {a1.dtype}, {a2.dtype}, {a3.dtype}")
print(f"Datatype of bs': {b1.dtype}, {b2.dtype}, {b3.dtype}")
show_doc(requires_grad)
```
Performs both getting and setting of the [`requires_grad`](/torch_core.html#requires_grad) parameter of the tensors, which decides whether gradients are accumulated or not.
* If `b` is `None`: The function **gets** the [`requires_grad`](/torch_core.html#requires_grad) for the model parameter, to be more specific it returns the [`requires_grad`](/torch_core.html#requires_grad) of the first element in the model.
* Else if `b` is passed (a boolean value), [`requires_grad`](/torch_core.html#requires_grad) of all parameters of the model is **set** to `b`.
```
# Any Pytorch model
m = simple_cnn([3, 6, 12], bn=True)
# Get the requires_grad of model
print("requires_grad of model: {}".format(requires_grad(m)))
# Set requires_grad of all params in model to false
requires_grad(m, False)
# Get the requires_grad of model
print("requires_grad of model: {}".format(requires_grad(m)))
show_doc(tensor)
```
Handy function when you want to convert any list type object to tensor, initialize your weights manually, and other similar cases.
**NB**: When passing multiple vectors, all vectors must be of the same dimensions. (Obvious but can be forgotten sometimes)
```
# Conversion from any numpy array
b = tensor(np.array([1, 2, 3]))
print(b, type(b))
# Passing as multiple parameters
b = tensor(1, 2, 3)
print(b, type(b))
# Passing a single list
b = tensor([1, 2, 3])
print(b, type(b))
# Can work with multiple vectors / lists
b = tensor([1, 2], [3, 4])
print(b, type(b))
show_doc(to_cpu)
```
A wrapper on top of Pytorch's `torch.Tensor.cpu()` function, which creates and returns a copy of a tensor or even a **list** of tensors on the CPU. As described in Pytorch's docs, if the tensor or list of tensors is already on the CPU, the exact data is returned and no copy is made.
Useful to convert all the list of parameters of the model to CPU in a single call.
```
if torch.cuda.is_available():
a = [torch.randn((1, 1)).cuda() for i in range(3)]
print(a)
print("Id of tensors in a: ")
for i in a: print(id(i))
# Getting a CPU version of the tensors in GPU
b = to_cpu(a)
print(b)
print("Id of tensors in b:")
for i in b: print(id(i))
# Trying to perform to_cpu on a list of tensor already in CPU
c = to_cpu(b)
print(c)
# The tensors in c have the exact same ids as those in b. No copy performed.
print("Id of tensors in c:")
for i in c: print(id(i))
show_doc(to_data)
```
Returns the data attribute from the object or collection of objects that inherits from [`ItemBase`](/core.html#ItemBase) class. Useful to examine the exact values of the data, could be used to work with the data outside of `fastai` classes.
```
# Default example examined
from fastai import *
from fastai.vision import *
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
# Examine the labels
ys = list(data.y)
print("Category display names: ", [ys[0], ys[-1]])
print("Unique classes internally represented as: ", to_data([ys[0], ys[-1]]))
show_doc(to_detach)
show_doc(to_device)
show_doc(to_half)
```
Converts the tensor or list of tensors to `FP16`, resulting in less memory consumption and faster computations with the tensor. It does not convert `torch.int` types to half precision.
```
a1 = torch.tensor([1, 2], dtype=torch.int64)
a2 = torch.tensor([1, 2], dtype=torch.int32)
a3 = torch.tensor([1, 2], dtype=torch.int16)
a4 = torch.tensor([1, 2], dtype=torch.float64)
a5 = torch.tensor([1, 2], dtype=torch.float32)
a6 = torch.tensor([1, 2], dtype=torch.float16)
print("dtype of as: ", a1.dtype, a2.dtype, a3.dtype, a4.dtype, a5.dtype, a6.dtype, sep="\t")
b1, b2, b3, b4, b5, b6 = to_half([a1, a2, a3, a4, a5, a6])
print("dtype of bs: ", b1.dtype, b2.dtype, b3.dtype, b4.dtype, b5.dtype, b6.dtype, sep="\t")
show_doc(to_np)
```
Internally puts the data to CPU, and converts to `numpy.ndarray` equivalent of `torch.tensor` by calling `torch.Tensor.numpy()`.
```
a = torch.tensor([1, 2], dtype=torch.float64)
if torch.cuda.is_available():
a = a.cuda()
print(a, type(a), a.device)
b = to_np(a)
print(b, type(b))
show_doc(try_int)
# Converts floating point numbers to integer
print(try_int(12.5), type(try_int(12.5)))
# This is a Rank-1 ndarray, which ideally should not be converted to int
print(try_int(np.array([1.5])), try_int(np.array([1.5])).dtype)
# Numpy array with a single elements are converted to int
print(try_int(np.array(1.5)), type(try_int(np.array(1.5))))
print(try_int(torch.tensor(2.5)), type(try_int(torch.tensor(2.5))))
# Strings are not converted to int (of course)
print(try_int("12.5"), type(try_int("12.5")))
```
## Functions to deal with model initialization
```
show_doc(apply_init)
show_doc(apply_leaf)
show_doc(cond_init)
show_doc(in_channels)
show_doc(init_default)
```
## Functions to get information of a model
```
show_doc(children)
show_doc(children_and_parameters)
show_doc(first_layer)
show_doc(last_layer)
show_doc(num_children)
show_doc(one_param)
show_doc(range_children)
show_doc(trainable_params)
```
## Functions to deal with BatchNorm layers
```
show_doc(bn2float)
show_doc(set_bn_eval)
show_doc(split_no_wd_params)
```
This is used by the optimizer to determine which params weight decay should be applied to when the option `bn_wd=False` is used in a [`Learner`](/basic_train.html#Learner).
## Functions to get random tensors
```
show_doc(log_uniform)
log_uniform(0.5,2,(8,))
show_doc(rand_bool)
rand_bool(0.5, 8)
show_doc(uniform)
uniform(0,1,(8,))
show_doc(uniform_int)
uniform_int(0,2,(8,))
```
## Other functions
```
show_doc(ModelOnCPU, title_level=3)
show_doc(NoneReduceOnCPU, title_level=3)
show_doc(ParameterModule, title_level=3)
show_doc(data_collate)
show_doc(get_model)
show_doc(grab_idx)
show_doc(logit)
show_doc(logit_)
show_doc(model_type)
show_doc(np_address)
show_doc(split_model)
```
If `splits` are layers, the model is split at those (not included) sequentially. If `want_idxs` is True, the corresponding indexes are returned. If `splits` are lists of layers, the model is split according to those.
```
show_doc(split_model_idx)
show_doc(trange_of)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(tensor__array__)
show_doc(ParameterModule.forward)
```
## New Methods - Please document or move to the undocumented section
```
show_doc(to_float)
show_doc(flatten_check)
```
# Demonstration of basic image manipulation with SIRF/CIL
This demonstration shows how to create image data objects for MR, CT and PET and how to work with them.
This demo is a jupyter notebook, i.e. intended to be run step by step.
Author: Kris Thielemans, Richard Brown, Christoph Kolbitsch
First version: 8th of September 2016
Second Version: 17th of May 2018
Third Version: 23rd of October 2019
Fourth Version: 23rd of April 2021
CCP SyneRBI Synergistic Image Reconstruction Framework (SIRF).
Copyright 2015 - 2017 Rutherford Appleton Laboratory STFC.
Copyright 2015 - 2019, 2021 University College London.
Copyright 2021 Physikalisch-Technische Bundesanstalt.
This is software developed for the Collaborative Computational
Project in Synergistic Reconstruction for Biomedical Imaging
(http://www.ccpsynerbi.ac.uk/).
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Initial set-up
```
# Make sure figures appears inline and animations works
%matplotlib notebook
# We have placed a file in this directory, notebook_setup.py, which will allow us to import the sirf_exercises library
import notebook_setup
# The sirf_exercises defines some handy tools for these notebooks
from sirf_exercises import cd_to_working_dir
# Initial imports etc
import numpy
from numpy.linalg import norm
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import os
import sys
import shutil
import brainweb
from tqdm.auto import tqdm
from sirf.Utilities import examples_data_path
```
## make sure that your installation knows where to read and write data
Later scripts will first have to download data. In addition, the SIRF exercises are set-up to write output in a separate "working directory" to avoid cluttering/overwriting your SIRF files. We need to tell Python where that will be. To do that, you have to run the `download_data.sh` script. You can do that from a terminal, or from this notebook.
The following cell will run the script to simply print a usage message.
```
%%bash
bash ../../scripts/download_data.sh -h
```
Let's now run the script again. The line below will actually not download anything (see further notebooks) but configure the destination directory, which is also used for the "working directory" set-up.
Note that you might want to use the `-d` option to write files somewhere else than the default location. (If you're running this as part of a training session, follow the advice given by your instructors of course!).
```
%%bash
bash ../../scripts/download_data.sh
```
We can now move to a working directory for this notebook.
```
cd_to_working_dir('Introductory', 'introduction')
```
Let's check where we are by using the ipython "magic" command to print the current working directory
```
%pwd
```
# Utilities
First define some handy function definitions to make subsequent code cleaner. You can ignore them when you first see this demo.
They have (minimal) documentation using Python docstrings such that you can do for instance `help(plot_2d_image)`
```
def plot_2d_image(idx,vol,title,clims=None,cmap="viridis"):
"""Customized version of subplot to plot 2D image"""
plt.subplot(*idx)
plt.imshow(vol,cmap=cmap)
if not clims is None:
plt.clim(clims)
plt.colorbar(shrink=.6)
plt.title(title)
plt.axis("off")
def crop_and_fill(templ_im, vol):
"""Crop volumetric image data and replace image content in template image object"""
# Get size of template image and crop
idim = templ_im.as_array().shape
# Let's make sure everything is centered.
# Because offset is used to index an array it has to be of type integer, so we do an integer division using '//'
offset = (numpy.array(vol.shape) - numpy.array(idim)) // 2
vol = vol[offset[0]:offset[0]+idim[0], offset[1]:offset[1]+idim[1], offset[2]:offset[2]+idim[2]]
# Make a copy of the template to ensure we do not overwrite it
templ_im_out = templ_im.clone()
# Fill image content
templ_im_out.fill(numpy.reshape(vol, idim))
return(templ_im_out)
```
Note that SIRF and CIL have their own `show*` functions which will be used on other demos.
# Get brainweb data
We will download and use Brainweb data, which is made more convenient by using the Python brainweb module. We will use an FDG image for PET. MR usually provides qualitative images with an image contrast proportional to differences in T1, T2 or T2* depending on the sequence parameters. Nevertheless, we will make our life easy by directly using the T1 map provided by brainweb for MR.
```
fname, url= sorted(brainweb.utils.LINKS.items())[0]
files = brainweb.get_file(fname, url, ".")
data = brainweb.load_file(fname)
brainweb.seed(1337)
for f in tqdm([fname], desc="mMR ground truths", unit="subject"):
vol = brainweb.get_mmr_fromfile(f, petNoise=1, t1Noise=0.75, t2Noise=0.75, petSigma=1, t1Sigma=1, t2Sigma=1)
FDG_arr = vol['PET']
T1_arr = vol['T1']
uMap_arr = vol['uMap']
```
## Display it
The convention for the image dimensions in the brainweb images is [z, y, x]. If we want to
display the central slice (i.e. z), we therefore have to use the 0th dimension of the array.
We are using an integer division using '//' to ensure we can use the value to index the array.
```
plt.figure();
slice_show = FDG_arr.shape[0]//2
# The images are very large, so we only want to visualise the central part of the image. In Python this can be
# achieved by using e.g. 100:-100 as indices. This will "crop" the first 100 and last 100 voxels of the array.
plot_2d_image([1,3,1], FDG_arr[slice_show, 100:-100, 100:-100], 'FDG', cmap="hot")
plot_2d_image([1,3,2], T1_arr[slice_show, 100:-100, 100:-100], 'T1', cmap="Greys_r")
plot_2d_image([1,3,3], uMap_arr[slice_show, 100:-100, 100:-100], 'uMap', cmap="bone")
```
More than likely, this image came out a bit small for your set-up. You can check the default image size as follows (note: units are inches)
```
plt.rcParams['figure.figsize']
```
You can then change them to a size more suitable for your situation, e.g.
```
plt.rcParams['figure.figsize']=[10,7]
```
Now execute the cell above that plots the images again to see if that helped.
You can make this change permanent by changing your `matplotlibrc` file (this might be non-trivial when running on Docker or JupyterHub instance!). You will need to search for `figure.figsize` in that file. Its location can be found as follows:
```
import matplotlib
matplotlib.matplotlib_fname()
```
# SIRF/CIL ImageData based on Brainweb
In order to create an __MR__, __PET__ or __CT__ `ImageData` object, we need some information about the modality, the hardware used for scanning and, to some extent, also the acquisition and reconstruction process. Most of this information is contained in the raw data files which can be exported from the __MR__ and __PET__ scanners. For __CT__ the parameters can be defined manually.
In the following we will now go through each modality separately and show how a simple `ImageData` object can be created. In the last part of the notebook we will then show examples about how to display the image data with python or how to manipulate the image data (e.g. multiply it with a constant or calculate its norm).
In order to make our life easier, we will assume that the voxel size and image orientation for __MR__, __PET__ and __CT__ are all the same, and that they are the same as for the brainweb data. This is of course not true; in real-life applications and/or synergistic image reconstruction we would need to resample the brainweb images before using them as input to the `ImageData` objects.
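As an illustration only (not needed in this notebook), such a resampling could be done on the numpy arrays with, for example, `scipy.ndimage.zoom`, assuming the orientations match and only the voxel sizes differ:
```
# Illustration only: resample a brainweb volume to a different voxel size before crop_and_fill.
# Assumption: orientations match and only the voxel sizes differ (no rotation/offset handling).
import scipy.ndimage

def resample_to_voxel_size(vol, source_voxel_size, target_voxel_size):
    """Resample a 3D numpy array from source to target voxel size (per-axis, same units)."""
    zoom_factors = [s / t for s, t in zip(source_voxel_size, target_voxel_size)]
    return scipy.ndimage.zoom(vol, zoom_factors, order=1)  # linear interpolation

# hypothetical usage: vol_resampled = resample_to_voxel_size(T1_arr, (1.0, 1.0, 1.0), (2.0, 2.0, 2.0))
```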
# MR
Use the 'mr' prefix for all Gadgetron-based SIRF functions.
This is done here to explicitly differentiate between SIRF mr functions and
anything else.
```
import sirf.Gadgetron as mr
```
We'll need a template MR acquisition data object
```
templ_mr = mr.AcquisitionData(os.path.join(examples_data_path('MR'), 'simulated_MR_2D_cartesian.h5'))
```
In MR the dimensions of the image data depend of course on the data acquisition but they are also influenced by the reconstruction process. Therefore, we need to carry out an example reconstruction, in order to have all the information about the image.
```
# Simple reconstruction
preprocessed_data = mr.preprocess_acquisition_data(templ_mr)
recon = mr.FullySampledReconstructor()
recon.set_input(preprocessed_data)
recon.process()
im_mr = recon.get_output()
```
If the above failed with an error 'Server running Gadgetron not accessible', you probably still have to start a Gadgetron server. Check the [DocForParticipants](https://github.com/SyneRBI/SIRF-Exercises/blob/master/DocForParticipants.md#start-a-Gadgetron-server).
Now we have got an MR image object and can fill it with the brainweb data. The dimensions won't fit, but we will simply crop the image.
```
im_mr = crop_and_fill(im_mr, T1_arr)
# im_mr is an MR image object. In order to visualise it we need access to the underlying data array. This is
# provided by the function as_array(). This yields a numpy array which can then be easily displayed. More
# information on this is also provided at the end of the notebook.
plt.figure();
plot_2d_image([1,1,1], numpy.abs(im_mr.as_array())[im_mr.as_array().shape[0]//2, :, :], 'MR', cmap="Greys_r")
```
# CT
Use the 'ct' prefix for all CIL-based functions.
This is done here to explicitly differentiate between CIL ct functions and
anything else.
```
import cil.framework as ct
```
Create a template Cone Beam CT acquisition geometry
```
N = 120
angles = numpy.linspace(0, 360, 50, True, dtype=numpy.float32)
offset = 0.4
channels = 1
ag = ct.AcquisitionGeometry.create_Cone3D((offset,-100, 0), (offset,100,0))
ag.set_panel((N,N-2))
ag.set_channels(channels)
ag.set_angles(angles, angle_unit=ct.AcquisitionGeometry.DEGREE);
```
Now we can create a template CT image object
```
ig = ag.get_ImageGeometry()
im_ct = ig.allocate(None)
```
Now we have got a CT image object and can fill it with the brainweb data. The dimensions won't fit, but we will simply crop the image.
```
im_ct = crop_and_fill(im_ct, uMap_arr)
plt.figure();
plot_2d_image([1,1,1], im_ct.as_array()[im_ct.as_array().shape[0]//2, :, :], 'CT', cmap="bone")
```
# PET
Use the 'pet' prefix for all STIR-based SIRF functions.
This is done here to explicitly differentiate between SIRF pet functions and
anything else.
```
import sirf.STIR as pet
```
We'll need a template sinogram
```
templ_sino = pet.AcquisitionData(os.path.join(examples_data_path('PET'), 'mMR','mMR_template_span11.hs'))
```
Now we can create a template PET image object that would fit dimensions for that sinogram
```
im_pet = pet.ImageData(templ_sino)
```
Now we have got a PET image object and can fill it with the brainweb data. The dimensions won't fit, but we will simply crop the image.
```
im_pet = crop_and_fill(im_pet, FDG_arr)
plt.figure();
plot_2d_image([1,1,1], im_pet.as_array()[im_pet.as_array().shape[0]//2, :, :], 'PET', cmap="hot")
```
# Basic image manipulations
Images (like most other things in SIRF and CIL) are represented as *objects*, in this case of type `ImageData`.
In practice, this means that you can only manipulate its data via *methods*.
Image objects contain the actual voxel values, but also information on the number of voxels,
voxel size, etc. There are methods to get this information.
There are additional methods for other manipulations, such as basic image arithmetic (e.g.,
you can add image objects).
Because we created an `ImageData` object for each modality we can now simply select which modality we want to look at. Because SIRF is implemented to make the transition from one modality to the next very easy, many of the *methods* and *attributes* are exactly the same between __MR__, __PET__ or __CT__ . There are of course *methods* and *attributes* which are modality-specific but the basic handling of the `ImageData` objects is very similar between __MR__, __PET__ or __CT__ .
```
# Make a copy of the image of a specific modality
image_data_object = im_ct.clone()
```
What is an ImageData?
Images are represented by objects with several methods. The most important method
is `as_array()` which we'll use below.
```
# Let's see what all the methods are.
help(pet.ImageData)
# Use as_array to extract an array of voxel values
# The resulting array is a `numpy` array, as standard in Python.
image_array=image_data_object.as_array()
# We can use the standard `numpy` methods on this array, such as getting its `shape` (i.e. dimensions).
print(image_array.shape)
# Whenever we want to do something with the image-values, we have to do it via this array.
# Let's print a voxel-value roughly in the centre of the object.
# We will not use the centre because the intensity here happens to be 0.
centre = numpy.array(image_array.shape)//2
print(image_array[centre[0], centre[1]+20, centre[2]+20])
```
Manipulate the image data for illustration
```
# Multiply the data with a factor
image_array *= 0.01
# Stick this new data into the original image object.
# (This will not modify the file content, only the variable in memory.)
image_data_object.fill(image_array)
print(image_array[centre[0], centre[1]+20, centre[2]+20])
```
You can do basic math manipulations with ImageData objects
So the above lines can be done directly on the `image` object
```
image_data_object *= 0.01
# Let's check
image_array=image_data_object.as_array()
print(image_array[centre[0], centre[1]+20, centre[2]+20])
```
Display the middle slice of the image (which is really a 3D volume)
We will use our own `plot_2d_image` function (which was defined above) for brevity.
```
# Create a new figure
plt.figure()
# Display the slice (numpy.absolute is only necessary for MR but doesn't matter for PET or CT)
plot_2d_image([1,1,1], numpy.absolute(image_array[centre[0], :, :]), 'image data', cmap="viridis")
```
Some other things to do with ImageData objects
```
print(image_data_object.norm())
another_image=image_data_object*3+8.3
and_another=another_image+image_data_object
```
## **GRIP - TSF | Data Science & Business Analytics Internship**
### **Task 2 : K-Means Clustering**
### Author : AYOUB EL AAMRI.
# 1. Setup the environment
pandas, numpy for data manipulation.
matplotlib, seaborn for data visualisation.
sklearn for modelling.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(style="white", color_codes=True)
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import sklearn.metrics as metrics
from mpl_toolkits.mplot3d import Axes3D
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings('ignore')
```
# 2 .Importing data
```
iris = pd.read_csv('Iris.csv')
print(iris.head())
print('Data shape -->', iris.shape)
iris["Species"].value_counts()
```
# 3.Data preprocessing
```
data = iris.drop(['Species'], axis=1)
y = iris['Species']
```
### (i).Missing values
```
data.isnull().sum()
```
### (ii) Data Visualisation
```
f,ax=plt.subplots(1,2,figsize=(8,5))
iris['Species'].value_counts().plot.pie(explode=[0.1,0.1,0.1],autopct='%1.1f%%',ax=ax[0],shadow=True)
ax[0].set_title('Iris Species Count')
sns.countplot('Species',data=iris,ax=ax[1])
ax[1].set_title('Iris Species Count')
plt.show()
```
We can see that there are 50 samples each of all the Iris Species in the data set.
### FacetGrid Plot
```
# Plotting species for Sepal
sns.FacetGrid(iris, hue="Species", size=4) \
.map(plt.scatter, "SepalLengthCm", "SepalWidthCm") \
.add_legend()
# Plotting species for petals
sns.FacetGrid(iris, hue="Species", size=4) \
.map(plt.scatter, "PetalLengthCm", "PetalWidthCm") \
.add_legend()
```
We observe that the species are nearly linearly separable by petal size, but the sepal sizes are more mixed. This is an indication that the petals can support better and more accurate predictions than the sepals.
### Boxplot
```
fig=plt.gcf()
fig.set_size_inches(10,7)
fig=sns.boxplot(x='Species',y='SepalLengthCm',data=iris)
fig=sns.stripplot(x='Species',y='SepalLengthCm',data=iris,jitter=True,edgecolor='gray')
```
We can observe from the box plot that Iris-virginica has some outliers.
```
tmp = iris.drop('Id', axis=1)
tmp.hist(edgecolor='black', linewidth=1.2)
fig=plt.gcf()
fig.set_size_inches(12,6)
plt.show()
plt.figure(figsize=(10,10))
plt.subplot(2,2,1)
sns.violinplot(x='Species',y='PetalLengthCm',data=iris)
plt.subplot(2,2,2)
sns.violinplot(x='Species',y='PetalWidthCm',data=iris)
plt.subplot(2,2,3)
sns.violinplot(x='Species',y='SepalLengthCm',data=iris)
plt.subplot(2,2,4)
sns.violinplot(x='Species',y='SepalWidthCm',data=iris)
sns.pairplot(tmp, hue="Species", diag_kind="hist", size=1.6)
```
This shows how similar versicolor and virginica are, at least with the given features. But there could be features that you didn't measure that would more clearly separate the species. It's the same for any unsupervised learning - you need to have the right features to separate the groups in the best way.
### Converting Species to numeric
```
def y_label (invalue):
if invalue == 'Iris-setosa' :
return 1
elif invalue == 'Iris-virginica' :
return 0
else :
return 2
df1 = pd.DataFrame(data=y.values, columns=['species'])
df1['index']=df1['species'].apply(y_label)
```
# 4 Data Preparation
The data we use to build a clustering model should:
1. always be numeric, and
2. always be on the same scale.
### (i) Data Type
```
data.dtypes
```
The features we are using for clustering are numeric
### (ii).Scaling the data
```
std_scale = StandardScaler().fit(data)
data_scaled = std_scale.transform(data)
X_scaled = pd.DataFrame(data_scaled, columns = data.columns)
X_scaled.sample(5)
```
Hence, before we feed data to a clustering algorithm, it is imperative to bring it onto the same scale, which we do here using StandardScaler.
# 4 (a) K-Means algorithm
Let's try to visualise the data to see whether we can segregate it into clusters.
### (i). Scatter plot to visualise the scaled data and centroids for given K clusters (K-Means)
```
def plot_kmeans_scale(k):
    kmeans_model = KMeans(n_clusters=k, random_state=123)
    kmeans_model.fit(data_scaled)
    # Make predictions
    labels = kmeans_model.predict(data_scaled)
    # Get the fitted centroids
    centroid = kmeans_model.cluster_centers_
    fig = plt.figure(1, figsize=(3, 3))
    kx = Axes3D(fig, rect=[0, 0, 1, 1], elev=50, azim=120)
    for i in range(k):
        # Points assigned to cluster i
        points = np.array([data_scaled[j] for j in range(len(data_scaled)) if labels[j] == i])
        kx.scatter(points[:, 3], points[:, 0], points[:, 2], s=5, cmap='jet')
    # Plot the centroids on the same axes (petal width, sepal length, petal length)
    kx.scatter(centroid[:, 3], centroid[:, 0], centroid[:, 2], marker='*', s=200, c='red')
    #plt.title('Number of clusters = {}'.format(k))
    plt.show()
k=5
for i in range(k+1):
if i>1 :
plot_kmeans_scale(i)
```
The cluster centroids are indicated as red stars.
We ran the code for k = 2 to 5.
Squared Euclidean distance measures the distance between each data point and its centroid; the centroids are re-calculated until the stopping criterion is met, and the plots above show the results.
A good choice of the number of clusters will lead to compact and well-separated clusters.
That is, it maximizes intra-cluster similarity and minimizes inter-cluster similarity.
To measure the compactness of clusters (intra-cluster similarity), we can compute a measure called the "Within Sum of Squares" (WSS) for each cluster, or we can take an average.
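As a small sanity check (a sketch using the already-scaled `data_scaled`), WSS can be computed by hand from the cluster assignments and compared with scikit-learn's `inertia_` attribute:
```
# Sanity check: compute WSS by hand from the cluster assignments and compare
# with scikit-learn's inertia_ attribute (here for an arbitrary k=3 fit).
check_model = KMeans(n_clusters=3, random_state=123).fit(data_scaled)
assigned_centroids = check_model.cluster_centers_[check_model.labels_]
wss_manual = ((data_scaled - assigned_centroids) ** 2).sum()
print(wss_manual, check_model.inertia_)  # the two values agree up to floating-point error
```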
### (ii) Finding the optimal number of clusters - Scree plot/Elbow plot
The technique we use to determine optimum K, the number of clusters, is called the elbow method.
```
k=9
WSS = []
for k in range(1,9):
kmeans_model = KMeans(n_clusters=k, random_state=123)
kmeans_model.fit(data_scaled)
WSS.append(kmeans_model.inertia_)
plt.plot(range(1,9), WSS, marker='o')
plt.xlabel("Number of clusters")
plt.ylabel("Within-cluster WSS")
plt.title("Scree Plot")
plt.axvline(x=3, linestyle='--')
plt.text(3.1, max(WSS)*0.8, "optimal number of clusters = 3")
```
By plotting the number of centroids and the average distance between a data point and the centroid within the cluster we arrive at the above graph.
inertia_: the sum of squared distances (WSS) of the data points to their own cluster centroid.
Higher inertia_ means a higher spread of data points around their centroid; lower inertia_ means the data points are more concentrated around their centroid.
From the Scree plot, inertia_ decreases as the number of clusters increases. However, the decrease in inertia_ flattens out from 4 clusters onwards.
To finalise the optimum number of clusters, we need to assess how similar data points are to their own cluster compared to other clusters. This can be measured using the Silhouette Score.
The Silhouette score is maximum for 3 clusters. Also, it is evident from the Scree curve that inertia_ flattens out from 4 clusters onwards.
Hence, based on the Silhouette score and the Scree plot, 3 clusters were considered optimal.
```
for i in range(2,8):
labels=KMeans(n_clusters=i,random_state=123).fit(data_scaled).labels_
print ("Silhoutte score for k= "+str(i)+" is "+str(metrics.silhouette_score(data_scaled,labels,metric="euclidean",random_state=123)))
```
The silhouette score is a measure of how similar an object is to its own cluster compared to other clusters (separation). The value of this measure ranges from -1 to 1, and a higher value indicates stronger similarity to its own cluster.
```
# Re-fit with the optimal k=3 (the loop above left `labels` at k=7)
labels = KMeans(n_clusters=3, random_state=123).fit(data_scaled).labels_
scores = metrics.silhouette_samples(data_scaled, labels)
sns.distplot(scores)
df_scores = pd.DataFrame()
df_scores['SilhouetteScore'] = scores
df_scores['Species'] = iris['Species']
df_scores.hist(by='Species', column='SilhouetteScore', range=(0,1.0), bins=20)
sns.pairplot(df_scores, hue="Species", size=3)
```
### (iii). K-means clustering with 3 optimal Clusters
```
km = KMeans(n_clusters=3, random_state=123)
km.fit(data_scaled)
print('inertia with clusters=3 -->' ,km.inertia_)
km.cluster_centers_
```
### (iv) Make predictions on the lables using K=3
```
predicted_cluster = km.predict(data_scaled)
predicted_labels = km.labels_
```
### (v). Plot the scaled data partitioned into optimal clusters K=3
```
fig = plt.figure(1, figsize=(7,7))
ax = Axes3D(fig, rect=[0, 0, 1, 1], elev=50, azim=120)
ax.scatter(data_scaled[:, 3], data_scaled[:, 0], data_scaled[:, 2],
c=predicted_labels.astype(np.float), cmap='jet',edgecolor="k", s=150)
ax.set_xlabel("Petal width")
ax.set_ylabel("Sepal length")
ax.set_zlabel("Petal length")
plt.title("K Means", fontsize=14)
```
### (vi) Comparing Clustered data with original data for defining boundaries of 3 clusters(k-means)
```
from matplotlib import cm
fig = plt.figure(figsize=plt.figaspect(0.25))
ax = fig.add_subplot(1, 2, 1, projection='3d')
surf =ax.scatter(data_scaled[:, 3], data_scaled[:, 0],data_scaled[:, 2],
c=df1['index'], cmap='gist_rainbow',edgecolor="k", s=150)
ax.set_xlabel("Petal width")
ax.set_ylabel("Sepal length")
ax.set_zlabel("Petal length")
plt.title("Original data-IRIS", fontsize=14)
fig.colorbar(surf, shrink=0.5, aspect=10)
ax = fig.add_subplot(1, 2, 2, projection='3d')
ax.scatter(data_scaled[:, 3], data_scaled[:, 0], data_scaled[:, 2],
           c=predicted_labels.astype(float), cmap='jet', edgecolor='k', s=150)
ax.set_xlabel("Petal width")
ax.set_ylabel("Sepal length")
ax.set_zlabel("Petal length")
plt.title("K Means Clustering -IRIS", fontsize=14)
plt.show()
```
### Observations
In conclusion, based on the Silhouette score and the Scree plot, the data can be clustered into 3 main groups of species:
we can infer that the sepal sizes are more mixed, so the clustering algorithm cannot distinguish well between the two species versicolor and virginica. Also note that the species are nearly linearly separable by petal size.
We can compare this range of values with the original data (which is not scaled).
### (vii) Create cluster profiles and compare with original data labels
```
def predict_species (invalue):
if invalue == 1:
return 'Iris-setosa'
elif invalue == 0 :
return 'Iris-virginica'
else :
return 'Iris-versicolor'
df1['predict_label']= pd.DataFrame(data=predicted_labels, columns=['predict_label'])
df1['predict_species']=df1['predict_label'].apply(predict_species)
sum(np.where((df1['species']!=df1['predict_species']),1,0))
df1[df1['species']!=df1['predict_species']]
```
By K-means clustering with the number of clusters = 3, we are able to cluster 143 of the 150 samples correctly. The misclustered samples are confined to Iris-versicolor and Iris-virginica.
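A quick cross-tabulation (a supplementary check using the columns created above) makes the same point, showing that the confusion is confined to versicolor and virginica:
```
# Cross-tabulation of the true species against the species predicted from the k=3 clustering.
pd.crosstab(df1['species'], df1['predict_species'])
```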
```
import random
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.space import MultiGrid
from mesa.datacollection import DataCollector
def compute_gini(model):
agent_wealths = [agent.wealth for agent in model.schedule.agents]
x = sorted(agent_wealths)
N = model.num_agents
B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
return (1 + (1 / N) - 2 * B)
class MoneyModel(Model):
"""A simple model of an economy where agents exchange currency at random.
All the agents begin with one unit of currency, and each time step can give
a unit of currency to another agent. Note how, over time, this produces a
highly skewed distribution of wealth.
"""
def __init__(self, N, width, height):
self.num_agents = N
self.grid = MultiGrid(height, width, True)
self.schedule = RandomActivation(self)
self.datacollector = DataCollector(
model_reporters={"Gini": compute_gini},
agent_reporters={"Wealth": "wealth"}
)
# Create agents
for i in range(self.num_agents):
a = MoneyAgent(i, self)
self.schedule.add(a)
# Add the agent to a random grid cell
x = random.randrange(self.grid.width)
y = random.randrange(self.grid.height)
self.grid.place_agent(a, (x, y))
self.running = True
self.datacollector.collect(self)
def step(self):
self.schedule.step()
# collect data
self.datacollector.collect(self)
def run_model(self, n):
for i in range(n):
self.step()
class MoneyAgent(Agent):
""" An agent with fixed initial wealth."""
def __init__(self, unique_id, model):
super().__init__(unique_id, model)
self.wealth = 1
def move(self):
possible_steps = self.model.grid.get_neighborhood(
self.pos, moore=True, include_center=False
)
new_position = random.choice(possible_steps)
self.model.grid.move_agent(self, new_position)
def give_money(self):
cellmates = self.model.grid.get_cell_list_contents([self.pos])
if len(cellmates) > 1:
other = random.choice(cellmates)
other.wealth += 1
self.wealth -= 1
def step(self):
self.move()
if self.wealth > 0:
self.give_money()
```
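Before adding the interactive display, the model can also be run headless; a minimal sketch (the parameter values below are arbitrary) that collects the Gini coefficient and agent wealth through the `DataCollector` defined above:
```
# Headless run sketch: 50 agents on a 10x10 grid for 100 steps (arbitrary values).
model = MoneyModel(50, 10, 10)
model.run_model(100)

# The DataCollector defined above returns pandas DataFrames for its reporters.
gini_over_time = model.datacollector.get_model_vars_dataframe()
print(gini_over_time.tail())

wealth_by_agent = model.datacollector.get_agent_vars_dataframe()
print(wealth_by_agent.xs(100, level="Step")["Wealth"].value_counts())
```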
## Process for adding interactive display
```
from mesa.visualization.ModularVisualization import ModularServer
from mesa.visualization.modules import CanvasGrid
from mesa.visualization.modules import ChartModule
from mesa.visualization.UserParam import UserSettableParameter
def agent_portrayal(agent):
portrayal = {"Shape": "circle",
"Filled": "true",
"r": 0.5}
if agent.wealth > 0:
portrayal["Color"] = "red"
portrayal["Layer"] = 0
else:
portrayal["Color"] = "grey"
portrayal["Layer"] = 1
portrayal["r"] = 0.2
return portrayal
grid = CanvasGrid(agent_portrayal, 10, 10, 500, 500)
chart = ChartModule([
{"Label": "Gini", "Color": "#0000FF"}],
data_collector_name='datacollector')
model_params = {
"N": UserSettableParameter('slider', "Number of agents", 100, 2, 200, 1,
description="Choose how many agents to include in the model"),
"width": 10,
"height": 10
}
server = ModularServer(MoneyModel, [grid, chart], "Money Model", model_params)
server.port = 8521
server.launch()
```
# House Prices: Advanced Regression Techniques
## Table of Contents
- <b>Introduction</b>
- <b>Data Processing</b>
- Outliers
- Target variable
- <b>Feature engineering</b>
- Missing data
- <i>Exploration</i>
- <i>Imputation</i>
- Converting features
- <b>Machine Learning</b>
- Set up
- Initiating algorithms
- <i>Generalized linear models</i>
- <i>Ensemble methods (Gradient tree boosting)</i>
- Fitting algorithms
- <i>Fit all models</i>
- <i>Rank model performance</i>
- Stacking algorithms
- <b>Final predictions</b>
## Introduction
Hello Kagglers! In this kernel I'll be taking on the Kaggle Competition: 'House Prices: Advanced Regression Techniques'. This competition uses the Ames Housing Dataset, which contains roughly 1460 observations in each of the training and test sets, and 80 features to boot. The challenge is to predict property Sale Price, hence this is a Regression problem.
Throughout this kernel I will provide explanations about my code so you can understand the logic behind each action. While I'll conduct some feature engineering, my main focus will be to explore the predictive models and hopefully build an effective stacked model for final prediction.
At the time of posting, this model achieved a score within the top 12% of the Leaderboard, through a simple approach to stacking.
Well that's enough from me - enjoy the read and please feel free to share with me any feedback regarding my code or overall approach! I'm always looking to improve :).
```
# All project packages imported at the start
# Project packages
import pandas as pd
import numpy as np
# Visualisations
import matplotlib.pyplot as plt
import seaborn as sns
# Statistics
from scipy import stats
from scipy.stats import norm, skew
from statistics import mode
from scipy.special import boxcox1p
# Machine Learning
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import Lasso, Ridge, RidgeCV, ElasticNet
import xgboost as xgb
import lightgbm as lgb
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge
from catboost import Pool, CatBoostRegressor, cv
import sys
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
# Reading in the data
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
# Inspecting the train dataset
train.info()
# And now the test data
test.info()
```
There are a lot of object dtypes and a lot of missing values within this dataset. We'll need to consider these during data processing.
To add, a lot of the features have been abbreviated. For reference, here are their full names along with a brief explanation:
- SalePrice - the property's sale price in dollars. This is the target variable that you're trying to predict.
- MSSubClass: The building class
- MSZoning: The general zoning classification
- LotFrontage: Linear feet of street connected to property
- LotArea: Lot size in square feet
- Street: Type of road access
- Alley: Type of alley access
- LotShape: General shape of property
- LandContour: Flatness of the property
- Utilities: Type of utilities available
- LotConfig: Lot configuration
- LandSlope: Slope of property
- Neighborhood: Physical locations within Ames city limits
- Condition1: Proximity to main road or railroad
- Condition2: Proximity to main road or railroad (if a second is present)
- BldgType: Type of dwelling
- HouseStyle: Style of dwelling
- OverallQual: Overall material and finish quality
- OverallCond: Overall condition rating
- YearBuilt: Original construction date
- YearRemodAdd: Remodel date
- RoofStyle: Type of roof
- RoofMatl: Roof material
- Exterior1st: Exterior covering on house
- Exterior2nd: Exterior covering on house (if more than one material)
- MasVnrType: Masonry veneer type
- MasVnrArea: Masonry veneer area in square feet
- ExterQual: Exterior material quality
- ExterCond: Present condition of the material on the exterior
- Foundation: Type of foundation
- BsmtQual: Height of the basement
- BsmtCond: General condition of the basement
- BsmtExposure: Walkout or garden level basement walls
- BsmtFinType1: Quality of basement finished area
- BsmtFinSF1: Type 1 finished square feet
- BsmtFinType2: Quality of second finished area (if present)
- BsmtFinSF2: Type 2 finished square feet
- BsmtUnfSF: Unfinished square feet of basement area
- TotalBsmtSF: Total square feet of basement area
- Heating: Type of heating
- HeatingQC: Heating quality and condition
- CentralAir: Central air conditioning
- Electrical: Electrical system
- 1stFlrSF: First Floor square feet
- 2ndFlrSF: Second floor square feet
- LowQualFinSF: Low quality finished square feet (all floors)
- GrLivArea: Above grade (ground) living area square feet
- BsmtFullBath: Basement full bathrooms
- BsmtHalfBath: Basement half bathrooms
- FullBath: Full bathrooms above grade
- HalfBath: Half baths above grade
- Bedroom: Number of bedrooms above basement level
- Kitchen: Number of kitchens
- KitchenQual: Kitchen quality
- TotRmsAbvGrd: Total rooms above grade (does not include bathrooms)
- Functional: Home functionality rating
- Fireplaces: Number of fireplaces
- FireplaceQu: Fireplace quality
- GarageType: Garage location
- GarageYrBlt: Year garage was built
- GarageFinish: Interior finish of the garage
- GarageCars: Size of garage in car capacity
- GarageArea: Size of garage in square feet
- GarageQual: Garage quality
- GarageCond: Garage condition
- PavedDrive: Paved driveway
- WoodDeckSF: Wood deck area in square feet
- OpenPorchSF: Open porch area in square feet
- EnclosedPorch: Enclosed porch area in square feet
- 3SsnPorch: Three season porch area in square feet
- ScreenPorch: Screen porch area in square feet
- PoolArea: Pool area in square feet
- PoolQC: Pool quality
- Fence: Fence quality
- MiscFeature: Miscellaneous feature not covered in other categories
- MiscVal: $Value of miscellaneous feature
- MoSold: Month Sold
- YrSold: Year Sold
- SaleType: Type of sale
- SaleCondition: Condition of sale
```
# Viewing the first 10 observations
train.head(10)
# Let's get confirmation on the dataframe shapes
print("\nThe train data size is: {} ".format(train.shape))
print("The test data size is: {} ".format(test.shape))
```
That gives a better feel for what we are initially working with. As one final step before data processing, I'm going to take a copy of the ID column and remove it from both dataframes, since it is only needed when submitting final predictions to the Kaggle leaderboard, as opposed to being helpful within any predictive model.
```
#Save the 'Id' column
train_ID = train['Id']
test_ID = test['Id']
# Now drop the 'Id' column since it's unnecessary for the prediction process
train.drop("Id", axis = 1, inplace = True)
test.drop("Id", axis = 1, inplace = True)
```
# Data Processing
## Outliers
The Ames dataset documentation reveals two outliers in the feature GrLivArea (Above grade (ground) living area square feet) - let's inspect these with a quick graph:
```
# Checking for outliers in GrLivArea as indicated in dataset documentation
sns.regplot(x=train['GrLivArea'], y=train['SalePrice'], fit_reg=True)
plt.show()
```
Yep, two pretty clear outliers in the bottom right hand corner. It's not always appropriate to delete outliers - removing too many can actually harm the model's quality. These two however look relatively safe, and with backing from the documentation I'm going to go ahead and clear them.
```
# Removing two very extreme outliers in the bottom right hand corner
train = train.drop(train[(train['GrLivArea']>4000) & (train['SalePrice']<300000)].index)
# Re-check graph
sns.regplot(x=train['GrLivArea'], y=train['SalePrice'], fit_reg=True)
plt.show()
```
The updated graph is looking better now. Praise to the documentation!
## Target Variable
Let's now learn more about the Target Variable - Sale Price. I'm particularly interested in detecting any skew which would become problematic during the modelling phase.
```
(mu, sigma) = norm.fit(train['SalePrice'])
# 1. Plot Sale Price
sns.distplot(train['SalePrice'] , fit=norm);
plt.ylabel('Frequency')
plt.title('SalePrice distribution')
plt.legend(['Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)],
loc='best')
# Get the fitted parameters used by the function
print( '\n mu = {:.2f} and sigma = {:.2f}\n'.format(mu, sigma))
# 2. Plot SalePrice as a QQPlot
fig = plt.figure()
res = stats.probplot(train['SalePrice'], plot=plt)
plt.show()
```
We can see here the Target Variable is right skewed. A log transformation should help bring it back to normality. The code below will complete this.
```
# Applying a log(1+x) transformation to SalePrice
train["SalePrice"] = np.log1p(train["SalePrice"])
# 1. Plot Sale Price
# Re-fit the normal distribution to the transformed target *before* plotting,
# so that the legend reflects the new parameters
(mu, sigma) = norm.fit(train['SalePrice'])
sns.distplot(train['SalePrice'] , fit=norm);
plt.ylabel('Frequency')
plt.title('SalePrice distribution')
plt.legend(['Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)],
            loc='best')
print( '\n mu = {:.2f} and sigma = {:.2f}\n'.format(mu, sigma))
# 2. Plot SalePrice as a QQPlot
fig = plt.figure()
res = stats.probplot(train['SalePrice'], plot=plt)
plt.show()
```
A thing of beauty - the target variable now looks far more amenable for modelling. Let's move on now to some feature engineering.
# Feature Engineering
Firstly, I will compile all data into a single dataset to save code duplication across both train & test sets:
```
# Saving train & test shapes
ntrain = train.shape[0]
ntest = test.shape[0]
# Creating y_train variable
y_train = train.SalePrice.values
# New all encompassing dataset
all_data = pd.concat((train, test)).reset_index(drop=True)
# Dropping the target
all_data.drop(['SalePrice'], axis=1, inplace=True)
# Printing all_data shape
print("all_data size is: {}".format(all_data.shape))
```
## Missing data
### Exploration
As was evident when initially inspecting the data, many feature variables are missing values. To get a better sense of this, I will compile a ranked table of missing values by the % of data missing.
```
# Getting a missing % count
all_data_missing = (all_data.isnull().sum() / len(all_data)) * 100
all_data_missing = all_data_missing.drop(all_data_missing[all_data_missing == 0].index).sort_values(ascending=False)
missing_data = pd.DataFrame({'Missing Percentage':all_data_missing})
missing_data.head(30)
```
Let's now make this data clearer by plotting it in a graph - enter barplot:
```
# Visualising missing data
f, ax = plt.subplots(figsize=(10, 6))
plt.xticks(rotation='90')
sns.barplot(x=missing_data.index, y=missing_data['Missing Percentage'])
plt.xlabel('Features', fontsize=15)
plt.ylabel('Percent of missing values', fontsize=15)
plt.title('Percent missing data by feature', fontsize=15)
```
A couple of features look severely depleted, but the rest only suffer a few omissions which means imputing these blank variables certainly becomes an option. To get a better sense for how each feature correlates to the target variable, i'll draw up a correlation matrix, before then tackling the missing data. See below!
```
# Initiate correlation matrix
corr = train.corr()
# Set-up mask
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Set-up figure
plt.figure(figsize=(14, 8))
# Title
plt.title('Overall Correlation of House Prices', fontsize=18)
# Correlation matrix
sns.heatmap(corr, mask=mask, annot=False,cmap='RdYlGn', linewidths=0.2, annot_kws={'size':20})
plt.show()
```
Lots of strong correlations on show, especially Overall Quality (not surprising)! Features regarding the Garage also correlate strongly. Right, let's impute the missing values ready for modelling.
### Imputation
I have bundled features into a few different operations depending on what best fits their structure, whether that is replacing with a string or integer to denote zero, or imputation via a specific value. I have spared you a lot of the trial and error; the final code below achieves 0 missing values across both datasets.
```
# All columns where missing values can be replaced with 'None'
for col in ('PoolQC', 'MiscFeature', 'Alley', 'Fence', 'FireplaceQu', 'GarageType', 'GarageFinish', 'GarageQual', 'GarageCond', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'MasVnrType', 'MSSubClass'):
all_data[col] = all_data[col].fillna('None')
# All columns where missing values can be replaced with 0
for col in ('GarageYrBlt', 'GarageArea', 'GarageCars', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF','TotalBsmtSF', 'BsmtFullBath', 'BsmtHalfBath', 'MasVnrArea'):
all_data[col] = all_data[col].fillna(0)
# All columns where missing values can be replaced with the mode (most frequently occurring value)
for col in ('MSZoning', 'Electrical', 'KitchenQual', 'Exterior1st', 'Exterior2nd', 'SaleType', 'Functional', 'Utilities'):
all_data[col] = all_data[col].fillna(all_data[col].mode()[0])
# Imputing LotFrontage with the median (middle) value
all_data['LotFrontage'] = all_data.groupby('Neighborhood')['LotFrontage'].apply(lambda x: x.fillna(x.median()))
# Checking the new missing % count
all_data_missing = (all_data.isnull().sum() / len(all_data)) * 100
all_data_missing = all_data_missing.drop(all_data_missing[all_data_missing == 0].index).sort_values(ascending=False)
missing_data = pd.DataFrame({'Missing Ratio':all_data_missing})
missing_data.head(30)
```
Another check on the Missing data table reveals exactly the desired outcome - nothing.
## Converting variables
### Amending dtypes
I am going to perform a few further actions before modelling the data. This will not be an exhaustive engineering process, but instead some simple steps that will hopefully support more powerful future models.
Firstly, there are some variables that should in fact be categorical rather than numeric, so i'll complete this step below.
```
# Converting those variables which should be categorical, rather than numeric
for col in ('MSSubClass', 'OverallCond', 'YrSold', 'MoSold'):
all_data[col] = all_data[col].astype(str)
all_data.info()
```
### Transforming skewed feature variables
Ok, the dataset is starting to look better. I considered and fixed for skew within the Target variable earlier on, let's now do the same for all remaining numeric Feature variables.
```
# Applying a log(1+x) transformation to all skewed numeric features
numeric_feats = all_data.dtypes[all_data.dtypes != "object"].index
# Compute skewness
skewed_feats = all_data[numeric_feats].apply(lambda x: skew(x.dropna())).sort_values(ascending=False)
skewness = pd.DataFrame({'Skew' :skewed_feats})
skewness.head(15)
```
<b>Box Cox Transformation of (highly) skewed features</b>
Skewed features are commonplace when dealing with real-world data. Transformation techniques can help to stabilize variance, make data more normal distribution-like and improve the validity of measures of association.
The problem with the Box-Cox Transformation is estimating lambda. This value depends on the existing data, and as such should be considered when performing cross-validation on out-of-sample datasets (a sketch of estimating lambda follows after the code below).
```
# Check on number of skewed features above 75% threshold
skewness = skewness[abs(skewness) > 0.75]
print("Total number of features requiring a fix for skewness is: {}".format(skewness.shape[0]))
# Now let's apply the box-cox transformation to correct for skewness
skewed_features = skewness.index
lam = 0.15
for feature in skewed_features:
all_data[feature] = boxcox1p(all_data[feature], lam)
```
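The fixed `lam = 0.15` above is a common heuristic. As a side note (a hedged sketch, not used for the final model), lambda could instead be estimated per feature with `scipy.stats.boxcox_normmax`:
```
# Sketch only (not used for the final model): estimate a per-feature lambda rather than
# the fixed 0.15 above. boxcox_normmax needs strictly positive input, hence the +1 shift
# mirroring boxcox1p. In practice this would be estimated *before* the transformation loop above.
from scipy.stats import boxcox_normmax

example_feature = skewed_features[0]
est_lam = boxcox_normmax(all_data[example_feature] + 1)
print(f"Estimated lambda for {example_feature}: {est_lam:.3f}")
```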
### New feature
I'm also going to create a new feature to bring together a few similar Features, into an overall 'Total Square Footage'.
```
# Creating a new feature: Total Square Footage
all_data['TotalSF'] = all_data['TotalBsmtSF'] + all_data['1stFlrSF'] + all_data['2ndFlrSF']
```
### Class imbalance
Lastly, a test for any significant class imbalance. Any variable where a single class is represented in greater than 97% of observations will be removed from the datasets. I also explored the same strategy at the 95% level, but found that model performance decreased ever so slightly with the removal of two further features - LandSlope & MiscFeature. Thus, I will stick at the 97% level.
```
# Identifying features where a class is over 97% represented
low_var_cat = [col for col in all_data.select_dtypes(exclude=['number']) if 1 - sum(all_data[col] == mode(all_data[col]))/len(all_data) < 0.03]
low_var_cat
# Dropping these columns from both datasets
all_data = all_data.drop(['Street', 'Utilities', 'Condition2', 'RoofMatl', 'Heating', 'PoolQC'], axis=1)
```
### Label encoding
This step builds on the previous one, whereby all text data will become numeric. This is a requirement for Machine Learning, that is, only numerical data can be fed into a predictive model. There are many other encoding techniques available, some of which are more powerful than Label Encoding, which does incur the risk of falsely ranking variables, e.g. coding three locations into 0, 1 and 2 might imply that 2 is a higher value than 0, which is incorrect as the numbers just represent different categories (locations). This is a simple approach, however, and therefore I'm going to stick with it for the current kernel.
Check out this link for more on encoding data:
https://www.kdnuggets.com/2015/12/beyond-one-hot-exploration-categorical-variables.html
```
# List of columns to Label Encode
cols = ('FireplaceQu', 'BsmtQual', 'BsmtCond', 'GarageQual', 'GarageCond',
'ExterQual', 'ExterCond','HeatingQC', 'KitchenQual', 'BsmtFinType1',
'BsmtFinType2', 'Functional', 'Fence', 'BsmtExposure', 'GarageFinish', 'LandSlope',
'LotShape', 'PavedDrive', 'Alley', 'CentralAir', 'MSSubClass', 'OverallCond',
'YrSold', 'MoSold')
# Process columns, apply LabelEncoder to categorical features
for c in cols:
lbl = LabelEncoder()
lbl.fit(list(all_data[c].values))
all_data[c] = lbl.transform(list(all_data[c].values))
# Check on data shape
print('Shape all_data: {}'.format(all_data.shape))
```
### Get dummies
I will now round up the feature engineering stage of this project by creating dummy variables ready for model building.
```
# Get dummies
all_data = pd.get_dummies(all_data)
all_data.shape
# Now to return to separate train/test sets for Machine Learning
train = all_data[:ntrain]
test = all_data[ntrain:]
```
# Machine Learning
## Set-up
Before modelling I am going to define a function that returns the cross-validation 'rmse' error, following 10-folds. This will ensure that all rmse scores produced have been smoothed out across the entire dataset and are not a result of any irregularities, which otherwise would provide a misleading representation of model performance. And that, we do not want.
```
# Set up variables
X_train = train
X_test = test
# Defining two rmse_cv functions
def rmse_cv(model):
rmse = np.sqrt(-cross_val_score(model, X_train, y_train, scoring="neg_mean_squared_error", cv = 10))
return(rmse)
```
With the rmse_cv function in place, I am going to tackle modelling in three phases - hopefully making it easy to follow:
1. Initiating algorithms
2. Fitting algorithms
3. Stacking algorithms
## 1. Initiating algorithms
I'm going to be working with two broad sets of algorithms within this kernel:
1. Generalized linear models
2. Ensemble methods (specifically Gradient Tree Boosting)
### A. Generalized linear models
I'm going to specifically focus on 'regularised' regression models within this section. <b>Regularisation</b> is a form of regression that shrinks (or 'regularises') the coefficient estimates towards zero. In other words, this technique discourages learning a more complex or flexible model, so as to avoid the risk of overfitting. This will be particularly helpful for the current dataset where the model needs to account for ~80 features.
There are different types of regularised regressions - I will now explore each of them.
#### 1. Ridge Regression (<i>L2 Regularisation</i>)
Ridge regression shrinks the regression coefficients, so that variables, with minor contribution to the outcome, have their coefficients <b>close to zero.</b>
The shrinkage of the coefficients is achieved by penalizing the regression model with a penalty term called L2-norm, which is the sum of the squared coefficients.
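Concretely, ridge regression (as implemented in scikit-learn's `Ridge`) estimates the coefficients by minimising the penalised least-squares objective

$$ \hat{\beta}^{\,ridge} = \underset{\beta}{\arg\min} \; \| y - X\beta \|_2^2 + \alpha \| \beta \|_2^2 $$

where a larger alpha shrinks the coefficients more strongly towards zero.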
For regularised regression models, the key tuning parameter is <b>alpha</b> - a regularization parameter that measures how flexible our model is. The higher the regularization the less prone our model will be to overfit. However it will also lose flexibility and might not capture all of the signal in the data. Thus I will define multiple alpha's, iterate over them and plot the result so we can easily see the optimal alpha level.
```
# Setting up list of alpha's
alphas = [0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30]
# Iterate over alpha's
cv_ridge = [rmse_cv(Ridge(alpha = alpha)).mean() for alpha in alphas]
# Plot findings
cv_ridge = pd.Series(cv_ridge, index = alphas)
cv_ridge.plot(title = "Validation")
plt.xlabel("Alpha")
plt.ylabel("Rmse")
# 5 looks like the optimal alpha level, so let's fit the Ridge model with this value
model_ridge = Ridge(alpha = 5)
```
#### 2. Lasso Regression <i>(L1 regularisation)</i>
Lasso stands for Least Absolute Shrinkage and Selection Operator. It shrinks the regression coefficients toward zero by penalizing the regression model with a penalty term called L1-norm, which is the sum of the absolute coefficients.
In the case of lasso regression, the penalty has the effect of forcing some of the coefficient estimates, with a minor contribution to the model, to be <b>exactly equal to zero</b>. This means that, lasso can be also seen as an alternative to the subset selection methods for performing variable selection in order to reduce the complexity of the model. For this reason, I usually prefer working with the Lasso algorithm over Ridge.
Let's take the same approach to alpha selection, before initiating the Lasso model.
```
# Setting up list of alpha's
alphas = [0.01, 0.005, 0.001, 0.0005, 0.0001]
# Iterate over alpha's
cv_lasso = [rmse_cv(Lasso(alpha = alpha)).mean() for alpha in alphas]
# Plot findings
cv_lasso = pd.Series(cv_lasso, index = alphas)
cv_lasso.plot(title = "Validation")
plt.xlabel("Alpha")
plt.ylabel("Rmse")
```
An addition to the Lasso model - I will use a Pipeline to scale features. For the L1 norm to work properly, it's essential this step is taken before fitting the model.
```
# Initiating Lasso model
model_lasso = make_pipeline(RobustScaler(), Lasso(alpha = 0.0005))
```
#### 3. ElasticNet Regression
Elastic Net produces a regression model that is penalized with both the L1-norm and L2-norm. The consequence of this is to effectively shrink coefficients (like in ridge regression) and to set some coefficients to zero (as in LASSO).
```
# Setting up list of alpha's
alphas = [0.01, 0.005, 0.001, 0.0005, 0.0001]
# Iterate over alpha's
cv_elastic = [rmse_cv(ElasticNet(alpha = alpha)).mean() for alpha in alphas]
# Plot findings
cv_elastic = pd.Series(cv_elastic, index = alphas)
cv_elastic.plot(title = "Validation")
plt.xlabel("Alpha")
plt.ylabel("Rmse")
```
Again, I'll be using RobustScaler to scale all features before initiating the ElasticNet model.
```
# Initiating ElasticNet model
model_elastic = make_pipeline(RobustScaler(), ElasticNet(alpha = 0.0005))
```
#### 4. Kernel ridge regression
OK, this is not strictly a generalized linear model. Kernel ridge regression (KRR) combines Ridge Regression (linear least squares with l2-norm regularization) with the 'kernel trick'. It thus learns a linear function in the space induced by the respective kernel and the data. For non-linear kernels, this corresponds to a non-linear function in the original space.
```
# Setting up list of alpha's
alphas = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
# Iterate over alpha's
cv_krr = [rmse_cv(KernelRidge(alpha = alpha)).mean() for alpha in alphas]
# Plot findings
cv_krr = pd.Series(cv_krr, index = alphas)
cv_krr.plot(title = "Validation")
plt.xlabel("Alpha")
plt.ylabel("Rmse")
```
As well as scaling features again for the Kernel ridge regression, I've defined a few more parameters within this algorithm:
- Kernel: Polynomial
- <i>This means that the algorithm will not just consider similarity between features, but also similarity between combinations of features.</i>
- Degree & Coef0:
- <i>These are used to define the precise structure of the Polynomial kernel. I arrived at the below numbers through a bit of trial and error. Implementing a GridSearchCV would probably yield a better overall fit.</i>
```
# Initiatiing KernelRidge model
model_krr = make_pipeline(RobustScaler(), KernelRidge(alpha=6, kernel='polynomial', degree=2.65, coef0=6.9))
```
### B. Ensemble methods (Gradient tree boosting)
Boosting is an ensemble technique in which the predictors are not made independently, but sequentially.
This technique employs the logic in which the subsequent predictors learn from the mistakes of the previous predictors. Therefore, the observations have an unequal probability of appearing in subsequent models and ones with the highest error appear most. The predictors can be chosen from a range of models like decision trees, regressors, classifiers etc. Because new predictors are learning from mistakes committed by previous predictors, it takes less time/iterations to reach close to actual predictions. But we have to choose the stopping criteria carefully or it could lead to overfitting on training data. Gradient Boosting is an example of a boosting algorithm, and these are what i'll be applying to the current data next.
#### 5. Gradient Boosting
For the Gradient Boosting algorithm I will use 'huber' as the loss function as this is robust to outliers. The other parameters on display originate from other kernels tackling this challenge, followed by trial and error to refine them to this specific dataset. Again, applying GridSearchCV will help to define a better set of parameters than those currently on display.
```
# Initiating Gradient Boosting Regressor
model_gbr = GradientBoostingRegressor(n_estimators=1200,
learning_rate=0.05,
max_depth=4,
max_features='sqrt',
min_samples_leaf=15,
min_samples_split=10,
loss='huber',
random_state=5)
```
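As noted above, a GridSearchCV would likely refine these hand-picked parameters. A minimal sketch of that idea follows; the grid values are illustrative assumptions, and the fit is left commented out because it can take a while to run.
```
# Hedged sketch of a GridSearchCV over the Gradient Boosting parameters
# (grid values are illustrative, not tuned).
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {'max_depth': [3, 4, 5],
              'learning_rate': [0.03, 0.05, 0.1],
              'n_estimators': [800, 1200]}
gbr_search = GridSearchCV(GradientBoostingRegressor(loss='huber', random_state=5),
                          param_grid, scoring='neg_mean_squared_error', cv=5)
# gbr_search.fit(X_train, y_train)
# print(gbr_search.best_params_)
```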
#### 6. XGBoost
Another gradient boosting algorithm; one that's well documented as being the key to many winning solutions on Kaggle.
```
# Initiating XGBRegressor
model_xgb = xgb.XGBRegressor(colsample_bytree=0.2,
learning_rate=0.06,
max_depth=3,
n_estimators=1150)
```
#### 7. LightGBM
A more recent gradient boosting algorithm which boasts significantly faster runtime than XGBoost, while still offering best-in-class predictive power.
```
# Initiating LGBMRegressor model
model_lgb = lgb.LGBMRegressor(objective='regression',
num_leaves=4,
learning_rate=0.05,
n_estimators=1080,
max_bin=75,
bagging_fraction=0.80,
bagging_freq=5,
feature_fraction=0.232,
feature_fraction_seed=9,
bagging_seed=9,
min_data_in_leaf=6,
min_sum_hessian_in_leaf=11)
```
#### 8. CatBoost
All the way from Russia, CatBoost is a new gradient boosting algorithm able to work with categorical features <b>without</b> any prior processing needed. I am still finding my feet with implementing the CatBoostRegressor - thus this section of the kernel is very much a work in progress. Any guidance on working with this algorithm would be greatly appreciated - especially with regards to performing cross-validation and hyperparameter tuning. The below parameters again came from my own trial & error.
```
# Initiating CatBoost Regressor model
model_cat = CatBoostRegressor(iterations=2000,
learning_rate=0.10,
depth=3,
l2_leaf_reg=4,
border_count=15,
loss_function='RMSE',
verbose=200)
# Initiating parameters ready for CatBoost's CV function, which I will use below
params = {'iterations':2000,
'learning_rate':0.10,
'depth':3,
'l2_leaf_reg':4,
'border_count':15,
'loss_function':'RMSE',
'verbose':200}
```
## 2. Fitting algorithms
### Fit all models
I'll now run the custom rmse_cv function on each algorithm to understand each model's performance. This function doesn't work for the CatBoost algorithm, so for that model I will use CatBoost's built-in cv function instead.
```
# Fitting all models with rmse_cv function, apart from CatBoost
cv_ridge = rmse_cv(model_ridge).mean()
cv_lasso = rmse_cv(model_lasso).mean()
cv_elastic = rmse_cv(model_elastic).mean()
cv_krr = rmse_cv(model_krr).mean()
cv_gbr = rmse_cv(model_gbr).mean()
cv_xgb = rmse_cv(model_xgb).mean()
cv_lgb = rmse_cv(model_lgb).mean()
# Define pool
pool = Pool(X_train, y_train)
# CV Catboost algorithm
cv_cat = cv(pool=pool, params=params, fold_count=10, shuffle=True)
# Select best model
cv_cat = cv_cat.at[1999, 'train-RMSE-mean']
```
### Rank model performance
The moment of truth - let's see how each algorithm has performed, and which one tops the pile.
```
# Creating a table of results, ranked highest to lowest
results = pd.DataFrame({
'Model': ['Ridge',
'Lasso',
'ElasticNet',
'Kernel Ridge',
'Gradient Boosting Regressor',
'XGBoost Regressor',
'Light Gradient Boosting Regressor',
'CatBoost'],
'Score': [cv_ridge,
cv_lasso,
cv_elastic,
cv_krr,
cv_gbr,
cv_xgb,
cv_lgb,
cv_cat]})
# Build dataframe of values
result_df = results.sort_values(by='Score', ascending=True).reset_index(drop=True)
result_df.head(8)
# Plotting model performance
f, ax = plt.subplots(figsize=(10, 6))
plt.xticks(rotation='90')
sns.barplot(x=result_df['Model'], y=result_df['Score'])
plt.xlabel('Models', fontsize=15)
plt.ylabel('Model performance', fontsize=15)
plt.ylim(0.10, 0.116)
plt.title('RMSE', fontsize=15)
```
We can see from the above graph that the LASSO and ElasticNet are the best cross-validated models, scoring very closely to one another. Gradient boosting hasn't fared quite as well; however, each algorithm still obtains a very respectable RMSE. The CatBoost model has not been cross-validated in the same way, so I am not going to consider this algorithm (for the time being).
## 3. Stacking algorithms
I've run eight models thus far, and they've all performed pretty well. I'm now quite keen to explore stacking as a means of achieving an even higher score. In a nutshell, stacking uses as a first level (base) the predictions of a few basic models and then uses another model at the second level to predict the output from the earlier first-level predictions. Stacking can be beneficial as combining models allows the best elements of their predictive power on the given challenge to be pooled, thus smoothing over any gaps left by an individual model and increasing the likelihood of stronger overall model performance.
Ok, let's get model predictions and then stack the results!
```
# Fit and predict all models
model_lasso.fit(X_train, y_train)
lasso_pred = np.expm1(model_lasso.predict(X_test))
model_elastic.fit(X_train, y_train)
elastic_pred = np.expm1(model_elastic.predict(X_test))
model_ridge.fit(X_train, y_train)
ridge_pred = np.expm1(model_ridge.predict(X_test))
model_xgb.fit(X_train, y_train)
xgb_pred = np.expm1(model_xgb.predict(X_test))
model_gbr.fit(X_train, y_train)
gbr_pred = np.expm1(model_gbr.predict(X_test))
model_lgb.fit(X_train, y_train)
lgb_pred = np.expm1(model_lgb.predict(X_test))
model_krr.fit(X_train, y_train)
krr_pred = np.expm1(model_krr.predict(X_test))
model_cat.fit(X_train, y_train)
cat_pred = np.expm1(model_cat.predict(X_test))
```
## Final predictions
Now to create the stacked model! I'm going to keep this very simple by equally weighting every model. This is done by summing together the models and then dividing by the total count. Weighted averages could be a means of gaining slightly better final predictions, whereby the best performing models take a bigger cut of the stacked model. One of the more important considerations when undertaking any kind of model stacking is model independence. Stacking models that draw similar conclusions from the data is quite unlikely to yield a better score compared to a single model, because there's no additional insight being drawn out. Rather, models that tackle the dataset in different ways, and that are able to detect unique aspects within it, stand a better chance of contributing to a more powerful overall stacked model, since, as a whole, more of the nuances within the data have been recognised and accounted for.
Please note, I am not going to include the CatBoost model as I found the model prediction declined when this was included - looking at the output, it appears as though it is overfitting the data (visible through the differing learn/test scores). I will return to this model later with a view to improving its application to the current dataset.
```
# Create stacked model
stacked = (lasso_pred + elastic_pred + ridge_pred + xgb_pred + lgb_pred + krr_pred + gbr_pred) / 7
# Setting up competition submission
sub = pd.DataFrame()
sub['Id'] = test_ID
sub['SalePrice'] = stacked
sub.to_csv('house_price_predictions.csv',index=False)
```
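As mentioned above, a weighted average is one way the blend could potentially be improved. A minimal sketch follows; the weights are illustrative assumptions (roughly favouring the best cross-validated models) and are not tuned.
```
# Hedged sketch of a weighted blend (weights are illustrative, not tuned)
blend_weights = {'lasso': 0.25, 'elastic': 0.25, 'ridge': 0.10, 'xgb': 0.10,
                 'lgb': 0.10, 'krr': 0.10, 'gbr': 0.10}
assert abs(sum(blend_weights.values()) - 1) < 1e-9  # weights should sum to one
weighted_stack = (blend_weights['lasso'] * lasso_pred +
                  blend_weights['elastic'] * elastic_pred +
                  blend_weights['ridge'] * ridge_pred +
                  blend_weights['xgb'] * xgb_pred +
                  blend_weights['lgb'] * lgb_pred +
                  blend_weights['krr'] * krr_pred +
                  blend_weights['gbr'] * gbr_pred)
```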
And there you have it! Within this kernel I have performed simple data preparation techniques before applying several models, and then combining their performance into a single stacked model. This achieved a final RMSE that pitched me within the top 12% of the leaderboard.
I hope the approach and techniques on display in this kernel have been helpful in terms of not just solving the current challenges, but other regression and broader machine learning challenges.
If this kernel has indeed helped you - I'd very much like to hear it :). Please also share with me any suggestions that could improve my final model; I'm always looking to learn more. In terms of future versions, I aim to tackle the following:
- Perfecting the CatBoost model
- Performing a more rigorous GridSearchCV
- Exploring more complex methods of model stacking for better final prediction.
Thank you for reading :).
Walk-through
============
This walk-through guides users through several key concepts for using the nervana graph. The corresponding jupyter notebook is found [here](https://github.com/NervanaSystems/ngraph-neon/blob/master/examples/walk_through/Graph_Introduction.ipynb).
Let's begin with a very simple example: computing ``x+1`` for several values of ``x`` using the ``ngraph``
API. We should think of the computation as being invoked from the *host*, but possibly taking place
somewhere else, which we will refer to as *the device.*
The nervana graph currently uses a compilation model. Users first define the computations by building a graph of operations, then they are compiled and run. In the future, we plan an even more compiler-like approach, where an executable is produced that can later be run on various platforms, in addition to an interactive version.
Our first program will use ngraph to compute ``x+1`` for each ``x`` provided.
The x+1 program
---------------
The complete program, which we will walk through, is:
```
from __future__ import print_function
from contextlib import closing
import neon as ng
import neon.transformers as ngt
# Build the graph
x = ng.placeholder(axes=())
x_plus_one = x + 1
# Select a transformer
with closing(ngt.make_transformer()) as transformer:
# Define a computation
plus_one = transformer.computation(x_plus_one, x)
# Run the computation
for i in range(5):
print(plus_one(i))
```
We begin by importing ``ngraph``, the Python module for graph construction, and ``ngraph.transformers``, the module for transformer operations.
```
import neon as ng
import neon.transformers as ngt
```
Next, we create a computational graph, which we refer to as ngraph, for the computation. Following TensorFlow terminology, we use ``placeholder`` to define a port for transferring tensors between the host and the device. ``Axes`` are used to tell the graph the tensor shape. In this example, ``x`` is a scalar so the axes are empty.
```
x = ng.placeholder(axes=())
```
x can be thought of as a dummy node of the ngraph, providing an entry point for data into the computational graph. The ``ngraph`` graph construction API uses functions to build a graph of ``Op`` objects, the ngraph. Each function may add operations to the ngraph, and will return an ``Op`` that represents the computation. Below, we implicitly use the ngraph (as will become evident in the next step) to add an ``Op`` that takes as input the variable tensor x just defined and the constant number 1.
```
x_plus_one = x + 1
```
A bit of behind the scenes magic occurs with the Python number ``1`` in the expression above, which is not an ``Op``. When an argument to a graph constructor is not an ``Op``, nervana graph will attempt to convert it to an ``Op`` using ``ng.constant``, the graph function for creating a constant.
Thus, what is really happening when we define x_plus_one as above is:
```
x_plus_one = ng.add(x, ng.constant(1))
```
For more information about the Op hierarchy please visit: https://ngraph.nervanasys.com/docs/latest/building_graphs.html <br>
<br>At this point, our computational graph has been defined with only one function to compute, represented by x_plus_one. Once the ngraph is defined, we can compile it with a *transformer*. Here we use ``make_transformer`` to make a default transformer. We tell the transformer the function to compute, ``x_plus_one``, and the associated input parameters, only ``x`` in our example. The constant need not be repeated here, as it is part of the definition of the function to compute. The current default transformer uses NumPy for execution.
```
# Select a transformer
with closing(ngt.make_transformer()) as transformer:
# Define a computation
plus_one = transformer.computation(x_plus_one, x)
# Run the computation
for i in range(5):
print(plus_one(i))
```
The first time the transformer executes a computation, the ngraph is analyzed and compiled, and storage is allocated and initialized on the device. Once compiled, the computations are callable Python objects residing on the host. On each call to ``plus_one`` the value of ``x`` is copied to the device, 1 is added, and then the result is copied
back from the device to the host.
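The same pattern extends to computations with more than one input. The sketch below is not part of the original walk-through; it assumes that ``transformer.computation`` accepts several placeholder parameters after the result op (matching the call signature used above) and that the ``+`` operator works between two placeholders just as it does between a placeholder and a constant.
```
# Sketch (assumptions noted above): a computation with two placeholder inputs.
from __future__ import print_function
from contextlib import closing
import neon as ng
import neon.transformers as ngt

x = ng.placeholder(axes=())
y = ng.placeholder(axes=())
x_plus_y = x + y

with closing(ngt.make_transformer()) as transformer:
    add_xy = transformer.computation(x_plus_y, x, y)
    print(add_xy(2, 3))  # expected to print 5.0
```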
### The Compiled x + 1 Program
The compiled code, to be executed on the device, can be examined (currently located in the ``/tmp`` folder) to view the runtime device model. Here we show the code with some clarifying comments.
```
class Model(object):
def __init__(self):
self.a_AssignableTensorOp_0_0 = None
self.a_AssignableTensorOp_0_0_v_AssignableTensorOp_0_0_ = None
self.a_AssignableTensorOp_1_0 = None
self.a_AssignableTensorOp_1_0_v_AssignableTensorOp_1_0_ = None
self.a_AddZeroDim_0_0 = None
self.a_AddZeroDim_0_0_v_AddZeroDim_0_0_ = None
self.be = NervanaObject.be
def alloc_a_AssignableTensorOp_0_0(self):
self.update_a_AssignableTensorOp_0_0(np.empty(1, dtype=np.dtype('float32')))
def update_a_AssignableTensorOp_0_0(self, buffer):
self.a_AssignableTensorOp_0_0 = buffer
self.a_AssignableTensorOp_0_0_v_AssignableTensorOp_0_0_ = np.ndarray(shape=(), dtype=np.float32,
buffer=buffer, offset=0, strides=())
def alloc_a_AssignableTensorOp_1_0(self):
self.update_a_AssignableTensorOp_1_0(np.empty(1, dtype=np.dtype('float32')))
def update_a_AssignableTensorOp_1_0(self, buffer):
self.a_AssignableTensorOp_1_0 = buffer
self.a_AssignableTensorOp_1_0_v_AssignableTensorOp_1_0_ = np.ndarray(shape=(), dtype=np.float32,
buffer=buffer, offset=0, strides=())
def alloc_a_AddZeroDim_0_0(self):
self.update_a_AddZeroDim_0_0(np.empty(1, dtype=np.dtype('float32')))
def update_a_AddZeroDim_0_0(self, buffer):
self.a_AddZeroDim_0_0 = buffer
self.a_AddZeroDim_0_0_v_AddZeroDim_0_0_ = np.ndarray(shape=(), dtype=np.float32,
buffer=buffer, offset=0, strides=())
def allocate(self):
self.alloc_a_AssignableTensorOp_0_0()
self.alloc_a_AssignableTensorOp_1_0()
self.alloc_a_AddZeroDim_0_0()
def Computation_0(self):
np.add(self.a_AssignableTensorOp_0_0_v_AssignableTensorOp_0_0_,
self.a_AssignableTensorOp_1_0_v_AssignableTensorOp_1_0_,
out=self.a_AddZeroDim_0_0_v_AddZeroDim_0_0_)
def init(self):
pass
```
Tensors have two components:
- storage for their elements (using the convention ``a_`` for the allocated storage of a tensor) and
- views of that storage (denoted as ``a_...v_``).
The ``alloc_`` methods allocate storage and then create the views of the storage that will be needed. The view creation is separated from the allocation because storage may be allocated in multiple ways.
Each allocated storage can also be initialized to, for example, random Gaussian variables. In this example, there are no initializations, so the method ``init``, which performs the one-time device
initialization, is empty. Constants, such as 1, are copied to the device as part of the allocation process.
The method ``Computation_0`` handles the ``plus_one`` computation. Clearly this is not the optimal way to add 1 to a scalar,
so let's look at a more complex example next in the Logistic Regression walk-through.
# Geography as Feature
```
import pandas as pd
import geopandas as gpd
import libpysal as lp
import matplotlib.pyplot as plt
import rasterio as rio
import numpy as np
import contextily as ctx
import shapely.geometry as geom
%matplotlib inline
```
Today, we'll talk about representing spatial relationships in Python using PySAL's *spatial weights* functionality. This provides a unified way to express the spatial relationships between observations.
First, though, we'll need to read in our data built in the `relations.ipynb` notebook: Airbnb listings & nightly prices for neighbourhoods in Austin.
```
listings = gpd.read_file('../data/listings.gpkg').to_crs(epsg=3857)
neighborhoods = gpd.read_file('../data/neighborhoods.gpkg').to_crs(epsg=3857)
listings.head()
listings.hood
```
Further, we'll grab a basemap for our study area using `contextily`. Contextily is a package designed to provide basemaps for data. It's best used for data in webmercator or raw WGS longitude-latitude coordinates.
Below, we are going to grab the basemap images for the `total_bounds` of our study area at a given zoom level. Further, we are specifying a different tile server from the default, the [Stamen Maps `toner-lite` tiles](http://maps.stamen.com/m2i/#toner-lite/1500:1000/12/47.5462/7.6196), to use since we like its aesthetics.
```
basemap, bounds = ctx.bounds2img(*listings.total_bounds, zoom=10,
url=ctx.tile_providers.ST_TONER_LITE)
```
Spatial plotting has come a long way since we first started in spatial data science. But, a few tricks for `geopandas` are still somewhat arcane, so it's useful to know them.
```
f = plt.figure(figsize=(8,8))
ax = plt.gca()
# TRICK 1: when you only want to plot the boundaries, not the polygons themselves:
neighborhoods.boundary.plot(color='k', ax=ax)
ax.imshow(basemap, extent=bounds, interpolation='bilinear')
ax.axis(neighborhoods.total_bounds[np.asarray([0,2,1,3])])
# TRICK 2: Sorting the data before plotting it will ensure that
# the highest (or lowest) categories are prioritized in the plot.
# Use this to mimic blending or control the order in which alpha blending might occur.
listings.sort_values('price').plot('price', ax=ax, marker='o', cmap='plasma', alpha=.5)
```
# Spatial Weights: expressing spatial relationships mathematically
Spatial weights matrices are mathematical objects that are designed to express the inter-relationships between sites in a given geolocated frame of analysis.
This means that the relationships between each site (of which there are usually $N$) to every other site is *represented* by the weights matrix, which is some $N \times N$ matrix of "weights," which are scalar numerical representations of these relationships.
In a similar fashion to *affinity matrices* in machine learning, spatial weights matrices are used in a wide variety of problems and models in quantitative geography and spatial data science to express the spatial relationships present in our data.
In Python, PySAL's `W` class is the main method by which people construct & represent spatial weights. This means that arbitrary inter-site linkages can be expressed using one dictionary, and another *optional* dictionary:
- **a `neighbors` dictionary,** which encodes a *focal observation*'s "name" and the other "named" observations to which the focal is linked.
- **a `weights` dictionary,** which encodes how strongly each of the neighbors are linked to the focal observation.
Usually, these are one-to-many mappings, dictionaries keyed with the "focal" observation and values which are lists of the names to which the key is attached.
An example below shows three observations, `a`,`b`, and `c`, arranged in a straight line:
```
neighbors = dict(a = ['b'],
b = ['a','c'],
c = ['b']
)
```
Connectivity strength is recorded in a separate dictionary whose keys should align with the `neighbors`:
```
weights = dict(a = [1],
b = [.2, .8],
c = [.3]
)
```
To construct the most generic spatial weights object, only the `neighbors` dictionary is required; the `weights` will assumed to be one everywhere.
```
binary = lp.weights.W(neighbors) # assumes all weights are one
binary.weights
weighted = lp.weights.W(neighbors, weights=weights)
weighted.weights
```
# Constructing different types of weights
By itself, this is not really useful; the hardest part of *using* these representations is constructing them from your original spatial data. Thus, we show below how this can be done. First, we cover *contiguity* weights, which are analogous to adjacency matrices. These are nearly always used for polygonal "lattice" data, but can also be used for points by examining their Voronoi diagram.
Second, we cover *distance* weights, which usually pertain to point data only. These tend to embed notions of distance decay, and are incredibly flexible for multiple forms of spatial data.
# Contiguity
Contiguity weights, or "adjacency matrices," are one common representation of spatial relationships that springs to mind when modeling how polygons relate to one another. In this representation, objects are considered "near" when they touch, and "far" when they don't. Adjacency is considered a "binary" relationship, so all polygons that are near to one another are *as near as they are to any other near polygon*.
We've got fast algos to build these kinds of relationships from `shapely`/`geopandas`, as well as directly from files (without having to read all the data in at once).
```
Qneighbs = lp.weights.Queen.from_dataframe(neighborhoods)
```
The `pysal` library has undergone a bit of restructuring.
The main components of the package are migrated to `libpysal`, which forms the base of a constellation of spatial data science packages.
Given this, you can plot the adjacency graph for the polygons shown above as another layer in the plot. We will strip back part of the view to make it simpler to examine:
```
f = plt.figure(figsize=(8,8))
ax = plt.gca()
# when you only want to plot the boundaries:
neighborhoods.boundary.plot(color='k', ax=ax, alpha=.4)
Qneighbs.plot(neighborhoods, edge_kws=dict(linewidth=1.5, color='orangered'),
node_kws=dict(marker='*'), ax=ax)
plt.show()
```
We can check if individual observations are disconnected using the weights object's `islands` attribute:
```
Qneighbs.islands
```
This is good news, as each polygon has at least one neighbor, and our graph has a single connected component.
PySAL weights can be used in other packages by converting them into their equivalent matrix representations. Sparse and dense array versions are offered, with `.sparse` providing the sparse matrix representation, and `.full()` providing the ids and dense matrix representing the graphs.
```
spqneighbs = Qneighbs.sparse
spqneighbs.eliminate_zeros()
```
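For the dense counterpart mentioned above, a quick sketch (assuming `.full()` returns the dense array first and the ids second):
```
# Dense representation of the same graph; the tuple ordering here is an assumption.
dense, ids = Qneighbs.full()
dense.shape, len(ids)
```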
Visualizing the matrix, you can see that the adjacency matrix is very sparse indeed:
```
plt.matshow(spqneighbs.toarray())
```
We can get the number of links as a percentage of all possible $N^2$ links from:
```
Qneighbs.pct_nonzero
```
This means that around 12.3% of all possible connections between pairs of observations actually make it into the adjacency graph.
For contiguity matrices, this only has binary elements, recording 1 where two observations are linked. Everywhere else, the array is empty (zero, in a dense representation).
```
np.unique(spqneighbs.data)
```
Fortunately for us, PySAL plays real well with scipy & other things built on top of SciPy. So, the [new compressed sparse graph (`csgraph`)](https://docs.scipy.org/doc/scipy/reference/sparse.csgraph.html) module in SciPy works wonders with the PySAL sparse weights representations. So, we often will jump back and forth between PySAL weights and scipy tools when working with these spatial representations of data.
```
import scipy.sparse.csgraph as csgraph
```
Now, in `csgraph`, there are a ton of tools to work with graphs. For example, we could use `csgraph.connected_components`:
```
number_connected, labels = csgraph.connected_components(spqneighbs)
```
And verify that we have a single connected component:
```
print(number_connected, labels)
Qconnected = lp.weights.Queen.from_dataframe(neighborhoods)
Qconnected.plot(neighborhoods, node_kws=dict(marker='*'), edge_kws=dict(linewidth=.4))
neighborhoods.boundary.plot(color='r', ax=plt.gca())
```
In addition, we could use the `lp.weights.w_subset` function, which would avoid re-constructing the weights again. This might help if they are truly massive, but it's often just as expensive to discover the subset as it is to construct a new weights object from this subset.
```
Qconnected2 = lp.weights.w_subset(Qneighbs, ids=[i for i in range(Qneighbs.n) if labels[i] == 0])
```
Sometimes, if `pandas` rearranges the dataframes, these will appear to be different weights since the ordering is different. To check if two weights objects are identical, a simple test is to check the sparse matrices for **in**equality:
```
(Qconnected2.sparse != Qconnected.sparse).sum()
```
### Alternative Representations
PySAL, by default, tends to focus on a single `W` object, which provides easy tools to construct & work with the accompanying sparse matrix representations.
However, it's often the case we want alternative representations of the same relationships.
One handy one is the adjacency list. This is an alternative form of expressing a weights matrix, and provides a copy of the underlying `W.sparse.data`, made more regular and put into a pandas dataframe.
```
adjlist = Qconnected.to_adjlist()
adjlist.head()
```
This is handy if you'd rather work with the representation in terms of individual edges, rather than in sets of edges.
Also, it is exceptionally handy when you want to ask questions about the data used to generate the spatial weights, since it lets you attach this data to each of the focal pairs and ask questions about the associated data at that level.
For example, say we get the median price of airbnbs within a given neighbourhood:
```
listings.price.dtype
listings.price
price = listings[['price']].replace('[\$,]', '', regex=True).astype(float)
price.mean(), price.max(), price.median(), price.min()
listings['price'] = price
```
Now, we are going to attach that back to the dataframe containing the neighbourhood information.
```
median_prices = gpd.sjoin(listings[['price', 'geometry']], neighborhoods, op='within')\
.groupby('index_right').price.median()
median_prices.head()
neighborhoods = neighborhoods.merge(median_prices.to_frame('median_price'),
left_index=True, right_index=True, how='left')
```
Then, we can map this information at the neighbourhood level, computed from the individual listings within each neighbourhood:
```
f = plt.figure(figsize=(8,8))
ax = plt.gca()
# when you only want to plot the boundaries:
neighborhoods.plot('median_price', cmap='plasma', alpha=.7, ax=ax)
#basemap of the area
ax.imshow(basemap, extent=bounds, interpolation='gaussian')
ax.axis(neighborhoods.total_bounds[np.asarray([0,2,1,3])])
#if you want the highest values to show on top of lower ones
plt.show()
```
Then, to examine the local relationships in price between nearby places, we could merge this information back up with the weights list and get the difference in price between every adjacent neighbourhood.
Usually, these joins involve building links between both the focal and neighbor observation IDs. You can do this simply by piping together two merges: one that focuses on the "focal" index and one that focuses on the "neighbor" index.
Using a suffix in the later merge will give the data joined on the focal index a distinct name from that joined on the neighbor index.
```
adjlist = adjlist.merge(neighborhoods[['hood_id',
'median_price']],
left_on='focal', right_index=True, how='left')\
.merge(neighborhoods[['hood_id',
'median_price']],
left_on='neighbor', right_index=True ,how='left',
suffixes=('_focal', '_neighbor'))
adjlist.head()
adjlist.median_price_neighbor
```
Then, we can group by the `focal` index and take the difference of the prices.
```
pricediff = adjlist[['median_price_focal',
'median_price_neighbor']].diff(axis=1)
pricediff.head()
```
We can link this back up to the original adjacency list, but first let's rename the column we want to `price_difference` and only keep that column:
```
pricediff['price_difference'] = pricediff[['median_price_neighbor']]
adjlist['price_difference'] = pricediff[['price_difference']]
```
And, if we wanted to find the pair of adjacent neighbourhoods with the greatest price difference:
```
adjlist.head()
```
Now, we can group by *both* the focal and neighbor name to get a meaningful list of all the neighborhood boundaries & their difference in median listing price.
```
contrasts = adjlist.groupby(("hood_id_focal", "hood_id_neighbor"))\
.price_difference.median().abs()\
.sort_values().to_frame().reset_index()
```
For about six neighbourhood pairs (since these will be duplicate `(A,B) & (B,A)` links), the median listing price is the same:
```
contrasts.query('price_difference == 0').sort_values(['hood_id_focal','hood_id_neighbor'])
```
On the other end, the 20 largest paired differences in median price between adjacent neighbourhoods are shown below:
```
contrasts.sort_values(['price_difference',
'hood_id_focal'],
ascending=[False,True]).head(40)
```
## Contiguity for points
Contiguity can also make sense for point objects, if you think about the corresponding Voronoi diagram and the Thiessen polygons' adjacency graph.
Effectively, this connects each point to a set of its nearest neighbouring points, without pre-specifying the number of points.
We can use it to define relationships between airbnb listings in our dataset.
```
listings.sort_values('price').plot('price', cmap='plasma', alpha=.5)
from libpysal.cg.voronoi import voronoi_frames
from libpysal.weights import Voronoi
lp.cg.voronoi_frames
lp.weights.Voronoi?
coordinates = np.vstack((listings.centroid.x, listings.centroid.y)).T
thiessens, points = voronoi_frames(coordinates)
```
However, the "natural" polygons generated by the `scipy.distance.voronoi` object may be excessively big, since some of the nearly-parallel lines in the voronoi diagram may take a long time to intersect.
```
f,ax = plt.subplots(1,2,figsize=(2.16*4,4))
thiessens.plot(ax=ax[0], edgecolor='k')
neighborhoods.plot(ax=ax[0], color='w', edgecolor='k')
ax[0].axis(neighborhoods.total_bounds[np.asarray([0,2,1,3])])
ax[0].set_title("Where we want to work")
thiessens.plot(ax=ax[1])
neighborhoods.plot(ax=ax[1], color='w', edgecolor='k')
ax[1].set_title("The outer limit of the voronoi diagram from SciPy")
ax[0].axis('off')
ax[1].axis('off')
plt.show()
```
Fortunately, PySAL can build weights for this number of observations really quickly. But the `geopandas` overlay operation is very slow for this many polygons, so even with a spatial index, clipping these polygons to the bounding box can take a bit...
```
thiessens.shape
listings.shape
neighborhoods['dummy']=1
```
So, we've precomputed the clipped version of the Thiessen polygons and stored them, so that we can move forward without waiting too long.
```
clipper = neighborhoods.dissolve(by='dummy')
clipper.plot()
thiessens.head()
thiessens.crs = clipper.crs
clipped_thiessens = gpd.overlay(thiessens, clipper, how='intersection')
clipped_thiessens.shape
clipped_thiessens.head()
clipped_thiessens.plot()
clipped_thiessens.to_file('../data/thiessens.gpkg')
clipped_thiessens = gpd.read_file('../data/thiessens.gpkg')
```
Note that, whereas the overlay operation to clean up this diagram took quite a bit of computation time if just called regularly ([and there may be plenty faster ways to do these kinds of ops](http://2018.geopython.net/#w4)), constructing the topology for all 11k Thiessen polygons is rather fast:
Just to show what this looks like, we will plot a part of one of the neighbourhoods in Austin: Hyde Park to the North of UT.
```
focal_neighborhood = 'Hyde Park'
focal = clipped_thiessens[listings.hood == focal_neighborhood]
focal = focal.reset_index()
focal.shape
focal.plot()
thiessen_focal_w = lp.weights.Rook.from_dataframe(focal)
f,ax = plt.subplots(1,3,figsize=(15,5),sharex=True,sharey=True)
# plot the airbnbs across the map
listings.plot('price', cmap='plasma', ax=ax[0],zorder=0, marker='.')
#
ax[0].set_xlim(*focal.total_bounds[np.asarray([0,2])])
ax[0].set_ylim(*focal.total_bounds[np.asarray([1,3])])
# Plot the thiessens corresponding to each listing in focal neighbourhood
listings[listings.hood == focal_neighborhood]\
.plot('price', cmap='plasma', marker='.', ax=ax[1], zorder=0)
focal.boundary.plot(ax=ax[1], linewidth=.7)
thiessen_focal_w.plot(focal, node_kws=dict(marker='.',s=0),
edge_kws=dict(linewidth=.5), color='b', ax=ax[2])
focal.boundary.plot(ax=ax[2], linewidth=.7)
# underlay the neighbourhood boundaries
for ax_ in ax:
neighborhoods.boundary.plot(ax=ax_, color='grey',zorder=1)
ax_.set_xticklabels([])
ax_.set_yticklabels([])
ax[0].set_title("All Listings", fontsize=20)
ax[1].set_title("Voronoi for Listings in %s"%focal_neighborhood, fontsize=20)
ax[2].set_title("AdjGraph for Listings Voronoi", fontsize=20)
f.tight_layout()
plt.show()
```
# Distance
Distance weights tend to reflect relationships that work based on distance decay. Often, people think of spatial kernel functions when talking about distance weighting. But, PySAL also recognizes/uses distance-banded weights, which consider any neighbor within a given distance threshold as "near," and K-nearest neighbor weights, which consider any of the $k$-closest points to each point as "near" to that point.
KNN weights, by default, are the only asymmetric weight PySAL will construct. However, using `csgraph`, one could prune/trim any of the contiguity or distance weights to be directed.
### Kernel weights
These weights are one of the most commonly-used kinds of distance weights. They reflect the case where similarity/spatial proximity is assumed or expected to decay with distance.
Many of these are quite a bit more heavy to compute than the contiguity graph discussed above, since the contiguity graph structure embeds simple assumptions about how shapes relate in space that kernel functions cannot assume.
Thus, I'll subset the data to a specific area of Austin before proceeding.
```
listings['hood']=listings['hood'].fillna(value="None").astype(str)
focal_listings = listings[listings.hood.str.startswith("Hyde")].reset_index()
focal_listings.sort_values('price').plot('price', cmap='plasma', zorder=3)
neighborhoods.boundary.plot(color='grey', ax=plt.gca())
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
Wkernel = lp.weights.Kernel.from_dataframe(focal_listings)
```
Now, if you wanted to see what these look like on the map:
```
focal_listings.assign(weights=Wkernel.sparse[0,:].toarray().flatten()).plot('weights', cmap='plasma')
neighborhoods.boundary.plot(color='grey', ax=plt.gca())
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
```
So, clearly, near things are weighted very highly, and distant things are weighted low.
So, if you're savvy with this, you may wonder:
> Why use PySAL kernel weights when `sklearn.pairwise.kernel_metrics` are so much faster?
Well, PySAL's got a few enhancements over and above scikit kernel functions.
1. **pre-specified bandwidths**: using the `bandwidth=` argument, you can give a specific bandwidth value for the kernel weight. This lets you use them in optimization routines where bandwidth might need to be a parameter that's optimized by another function.
2. **fixed vs. adaptive bandwidths**: adaptive bandwidths adjust the map distance to make things more "local" in densely-populated areas of the map and less "local" in sparsely-populated areas. This is adjusted by the...
3. **`k`-nearest neighborhood tuning**: this argument adjusts the number of nearby observations to use for the bandwidth.
Also, many of the scikit kernel functions are also implemented. The default is the `triangular` weight, which is a linear decay with distance.
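For reference, a standard form of the triangular kernel, with $z_{ij} = d_{ij}/h_i$ the distance between sites $i$ and $j$ scaled by the bandwidth at $i$, is

$$ K(z_{ij}) = \begin{cases} 1 - |z_{ij}| & \text{if } |z_{ij}| \le 1 \\ 0 & \text{otherwise,} \end{cases} $$

so the weight decays linearly with distance and drops to zero beyond the bandwidth.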
For example, an adaptive Triangular kernel and an adaptive Gaussian kernel are shown below, alongside the same point above for comparison.
```
Wkernel_adaptive = lp.weights.Kernel.from_dataframe(focal_listings, k=20, fixed=False)
Wkernel_adaptive_gaussian = lp.weights.Kernel.from_dataframe(focal_listings, k=10, fixed=False, function='gaussian')
f,ax = plt.subplots(1,3,figsize=(12,4))
focal_listings.assign(weights=Wkernel.sparse[0,:].toarray().flatten()).plot('weights', cmap='plasma',ax=ax[0])
focal_listings.assign(weights=Wkernel_adaptive.sparse[0,:].toarray().flatten()).plot('weights', cmap='plasma',ax=ax[1])
focal_listings.assign(weights=Wkernel_adaptive_gaussian.sparse[0,:].toarray().flatten()).plot('weights', cmap='plasma',ax=ax[2])
for i in range(3):
neighborhoods.boundary.plot(color='grey', ax=ax[i])
ax[i].axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
ax[i].set_xticklabels([])
ax[i].set_yticklabels([])
ax[0].set_title("Defaults (Triangular fixed kernel, k=2)")
ax[1].set_title("Adaptive Triangular Kernel, k=20")
ax[2].set_title("Adaptive Gaussian Kernel, k=10")
f.tight_layout()
plt.show()
```
In the adaptive kernels, you also obtain a distinct bandwidth at each site:
```
Wkernel_adaptive.bandwidth[0:5]
```
These are useful in their own right, since they communicate information about the structure of the density of points in the analysis frame:
```
f,ax = plt.subplots(1,2,figsize=(8,4))
focal_listings.assign(bandwidths=Wkernel_adaptive.bandwidth).plot('bandwidths', cmap='plasma',ax=ax[0])
focal_listings.assign(bandwidths=Wkernel_adaptive_gaussian.bandwidth).plot('bandwidths', cmap='plasma',ax=ax[1])
for i in range(2):
neighborhoods.boundary.plot(color='grey', ax=ax[i])
ax[i].axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
ax[i].set_xticklabels([])
ax[i].set_yticklabels([])
ax[0].set_title("Adaptive Triangular Kernel, k=20")
ax[0].set_ylabel("Site-specific bandwidths", fontsize=16)
ax[1].set_title("Adaptive Gaussian Kernel, k=10")
f.tight_layout()
plt.show()
```
Areas with large adaptive kernel bandwidths are considered in "sparse" regions and areas with small adaptive bandwidths are in "dense" regions; a similar kind of logic is used by clustering algorithms descended from DBSCAN.
### Distance bands
Conceptually, this is a binary kernel weight. All observations that are within a given distance from one another are considered "neighbors," and all that are further than this distance are "not neighbors."
In order for this weighting structure to connect all observations, it's useful to set this threshold to the largest distance connecting an observation to its nearest neighbor. That observation is the "most remote" one and will have at least one neighbor; every other observation is thus guaranteed to have at least one neighbor as well.
To get this minimum threshold distance, you can use the PySAL `min_threshold_distance` function, which takes an array of points and finds the minimum distance at which all observations are connected to at least one other observation:
```
point_array = np.vstack(focal_listings.geometry.apply(lambda p: np.hstack(p.xy)))
minthresh = lp.weights.min_threshold_distance(point_array)
print(minthresh)
```
This means that the most remote observation is just over 171 meters away from its nearest airbnb. Building a graph from this minimum distance, then, is done by passing this to the weights constructor:
```
dbandW = lp.weights.DistanceBand.from_dataframe(focal_listings, threshold=minthresh)
neighborhoods.boundary.plot(color='grey')
dbandW.plot(focal_listings, ax=plt.gca(), edge_kws=dict(color='r'), node_kws=dict(zorder=10))
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
```
This model of spatial relationships will guarantee that each observation has at least one neighbor, and will prevent any disconnected subgraphs from existing.
### KNNW
$K$-nearest neighbor weights are constructed by considering the nearest $k$ points to each observation as neighboring that observation. This is a common way of conceptualizing observations' neighbourhoods in machine learning applications, and it is also common in geographic data science applications.
```
KNNW = lp.weights.KNN.from_dataframe(focal_listings, k=10)
neighborhoods.boundary.plot(color='grey')
KNNW.plot(focal_listings,ax=plt.gca(), edge_kws=dict(color='r'), node_kws=dict(zorder=10))
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
```
One exceedingly common method of analysis using KNN weights is to change `k` repeatedly and evaluate which value works best. Thus, the KNN weights class provides a specific method, `reweight`, to do this in a way that avoids re-constructing its core data structure, the `kdtree`.
Further, this can add additional data to the weights object as well.
By default, this operates in place, but it can also provide a copy of the data structure if `inplace=False`.
```
KNNW20 = KNNW.reweight(k=20, inplace=False)
neighborhoods.boundary.plot(color='grey')
KNNW20.plot(focal_listings,ax=plt.gca(), edge_kws=dict(color='r'), node_kws=dict(zorder=10))
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
```
Further, since KNN weights are asymmetric, special methods are provided to make them symmetric:
```
KNNW20sym = KNNW20.symmetrize()
(KNNW20sym.sparse != KNNW20sym.sparse.T).sum()
(KNNW20.sparse != KNNW20.sparse.T).sum()
```
In fact, these symmetrizing methods exist for any other weights type too, so if you've got an arbitrarily-computed weights matrix, it can be used in that case.
### KNN on Polygons
While K-nearest neighbor weighting methods often make more sense for data in point formats, they are also applicable to data in polygons, where a *representative point* for each polygon is used to construct the K-nearest neighbors, instead of the polygons as a whole.
For comparison, I'll show this alongside the Queen weights shown above for the Austin neighbourhoods.
When the number of nearest neighbours is relatively large compared to the usual cardinality in an adjacency graph, this results in some neighbourhoods being connected to one another more than a single neighbourhood deep. That is, neighbourhoods are considered spatially connected even if they don't touch, since their *representative points* are so close to one another relative to the nearest alternatives.
```
KNN_neighborhoods = lp.weights.KNN.from_dataframe(neighborhoods, k=10).symmetrize()
f,ax = plt.subplots(1,2,figsize=(8,4))
for i in range(2):
neighborhoods.boundary.plot(color='grey',ax=ax[i])
ax[i].set_xticklabels([])
ax[i].set_yticklabels([])
KNN_neighborhoods.plot(neighborhoods, ax=ax[0], node_kws=dict(s=0), color='orangered')
Qconnected.plot(neighborhoods, ax=ax[1], node_kws=dict(s=0), color='skyblue')
ax[0].set_title("KNN(10)", fontsize=16)
ax[1].set_title("Queen Contiguity", fontsize=16)
f.tight_layout()
plt.show()
```
In contrast, very sparse K-nearest neighbour graphs will result in a significantly different connectivity structure than the contiguity graph, since the relative position of large areas' *representative points* largely determines which of the observations they touch will be considered "connected." Further, this often reduces the graph density in areas of the map with small elementary units, where contiguity cardinality is often higher.
```
KNN_neighborhoods = lp.weights.KNN.from_dataframe(neighborhoods, k=2).symmetrize()
f,ax = plt.subplots(1,2,figsize=(8,4))
for i in range(2):
neighborhoods.boundary.plot(color='grey',ax=ax[i])
ax[i].set_xticklabels([])
ax[i].set_yticklabels([])
KNN_neighborhoods.plot(neighborhoods, ax=ax[0], node_kws=dict(s=0), color='orangered')
Qconnected.plot(neighborhoods, ax=ax[1], node_kws=dict(s=0), color='skyblue')
ax[0].set_title("KNN(2)", fontsize=16)
ax[1].set_title("Queen Contiguity", fontsize=16)
f.tight_layout()
plt.show()
```
## More representations
There are similarly more representations available and currently under development, such as a networkx interface in `W.to_networkx/W.from_networkx`. Further, we're always willing to add additional constructors or methods to provide new and interesting ways to represent geographic relationships.
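A quick sketch of the networkx interface named above, assuming `networkx` is installed:
```
# Convert the queen contiguity graph to a networkx graph and inspect it
import networkx as nx

G = Qconnected.to_networkx()
print(nx.number_of_nodes(G), nx.number_of_edges(G))
print(nx.is_connected(G))
```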
This lab on Polynomial Regression and Step Functions is a python adaptation of p. 288-292 of "Introduction to Statistical Learning with Applications in R" by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani. Original adaptation by J. Warmenhoven, updated by R. Jordan Crouser at Smith College for SDS293: Machine Learning (Spring 2016).
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
import statsmodels.api as sm
import statsmodels.formula.api as smf
from patsy import dmatrix
%matplotlib inline
```
# 7.8.1 Polynomial Regression and Step Functions
In this lab, we'll explore how to generate the ${\tt Wage}$ dataset models we saw in class.
```
df = pd.read_csv('Wage.csv')
df.head(3)
```
We first fit the polynomial regression model using the following commands:
```
X1 = PolynomialFeatures(1).fit_transform(df.age.values.reshape(-1,1))
X2 = PolynomialFeatures(2).fit_transform(df.age.values.reshape(-1,1))
X3 = PolynomialFeatures(3).fit_transform(df.age.values.reshape(-1,1))
X4 = PolynomialFeatures(4).fit_transform(df.age.values.reshape(-1,1))
X5 = PolynomialFeatures(5).fit_transform(df.age.values.reshape(-1,1))
```
This syntax generates polynomial basis expansions, using the ${\tt PolynomialFeatures()}$ function, in order to predict
wage using up to a fourth-degree polynomial in ${\tt age}$. The ${\tt PolynomialFeatures()}$ command
allows us to avoid having to write out a long formula with powers
of ${\tt age}$. We can then fit our linear model:
```
fit2 = sm.GLS(df.wage, X4).fit()
fit2.summary().tables[1]
```
Next we consider the task of predicting whether an individual earns more
than \$250,000 per year. We proceed much as before, except that first we
create the appropriate response vector, and then we fit a logistic model using the ${\tt GLM()}$ function from ${\tt statsmodels}$:
```
# Create response matrix
y = (df.wage > 250).map({False:0, True:1}).to_numpy()
# Fit logistic model
clf = sm.GLM(y, X4, family=sm.families.Binomial(sm.families.links.logit))
res = clf.fit()
```
We now create a grid of values for ${\tt age}$ at which we want predictions, and
then call the generic ${\tt predict()}$ function for each model:
```
# Generate a sequence of age values spanning the range
age_grid = np.arange(df.age.min(), df.age.max()).reshape(-1,1)
# Generate test data
X_test = PolynomialFeatures(4).fit_transform(age_grid)
# Predict the value of the generated ages
pred1 = fit2.predict(X_test) # salary
pred2 = res.predict(X_test) # Pr(wage>250)
```
Finally, we plot the data and add the fit from the degree-4 polynomial.
```
# creating plots
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,5))
fig.suptitle('Degree-4 Polynomial', fontsize=14)
# Scatter plot with polynomial regression line
ax1.scatter(df.age, df.wage, facecolor='None', edgecolor='k', alpha=0.3)
ax1.plot(age_grid, pred1, color = 'b')
ax1.set_ylim(ymin=0)
# Logistic regression showing Pr(wage>250) for the age range.
ax2.plot(age_grid, pred2, color='b')
# Rug plot showing the distribution of wage>250 in the training data.
# 'True' on the top, 'False' on the bottom.
ax2.scatter(df.age, y/5, s=30, c='grey', marker='|', alpha=0.7)
ax2.set_ylim(-0.01,0.21)
ax2.set_xlabel('age')
ax2.set_ylabel('Pr(wage>250|age)')
```
# Deciding on a degree
In performing a polynomial regression we must decide on the degree of
the polynomial to use. One way to do this is by using hypothesis tests. We
now fit models ranging from linear to a degree-5 polynomial and seek to
determine the simplest model which is sufficient to explain the relationship
between ${\tt wage}$ and ${\tt age}$.
We can do this using the ${\tt anova\_lm()}$ function, which performs an
analysis of variance (ANOVA, using an F-test) in order to test the null
hypothesis that a model $M_1$ is sufficient to explain the data against the
alternative hypothesis that a more complex model $M_2$ is required. In order
to use the ${\tt anova\_lm()}$ function, $M_1$ and $M_2$ must be **nested models**: the
predictors in $M_1$ must be a subset of the predictors in $M_2$. In this case,
we fit five different models and sequentially compare the simpler model to
the more complex model:
```
fit_1 = fit = sm.GLS(df.wage, X1).fit()
fit_2 = fit = sm.GLS(df.wage, X2).fit()
fit_3 = fit = sm.GLS(df.wage, X3).fit()
fit_4 = fit = sm.GLS(df.wage, X4).fit()
fit_5 = fit = sm.GLS(df.wage, X5).fit()
print(sm.stats.anova_lm(fit_1, fit_2, fit_3, fit_4, fit_5, typ=1))
```
The $p$-value comparing the linear Model 1 to the quadratic Model 2 is
essentially zero $(<10^{-32})$, indicating that a linear fit is not sufficient. Similarly
the $p$-value comparing the quadratic Model 2 to the cubic Model 3
is very low (0.0017), so the quadratic fit is also insufficient. The $p$-value
comparing the cubic and degree-4 polynomials, Model 3 and Model 4, is approximately
0.05 while the degree-5 polynomial Model 5 seems unnecessary
because its $p$-value is 0.37. Hence, either a cubic or a quartic polynomial
appear to provide a reasonable fit to the data, but lower- or higher-order
models are not justified.
As an alternative to using hypothesis tests and ANOVA, we could choose
the polynomial degree using cross-validation as we have in previous labs.
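A minimal sketch of that approach, using scikit-learn's cross-validation utilities purely for convenience (it assumes the ${\tt df}$ data frame with ${\tt age}$ and ${\tt wage}$ is loaded as above); the degree with the smallest estimated test MSE would be selected:
```
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

X_age = df.age.values.reshape(-1, 1)

# Estimated test MSE for each candidate degree; pick the smallest.
for degree in range(1, 6):
    model = make_pipeline(PolynomialFeatures(degree, include_bias=False),
                          LinearRegression())
    mse = -cross_val_score(model, X_age, df.wage.values, cv=10,
                           scoring='neg_mean_squared_error').mean()
    print('degree %d: CV MSE = %.1f' % (degree, mse))
```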
# Step functions
In order to fit a step function, we use the ${\tt cut()}$ function:
```
df_cut, bins = pd.cut(df.age, 4, retbins=True, right=True)
df_cut.value_counts(sort=False)
```
Here ${\tt cut()}$ automatically picked the cutpoints at 33.5, 49, and 64.5 years
of age. We could also have specified our own cutpoints directly. Now let's create a set of dummy variables for use in the regression:
```
df_steps = pd.concat([df.age, df_cut, df.wage], keys=['age','age_cuts','wage'], axis=1)
# Create dummy variables for the age groups
df_steps_dummies = pd.get_dummies(df_steps['age_cuts'])
# Statsmodels requires explicit adding of a constant (intercept)
df_steps_dummies = sm.add_constant(df_steps_dummies)
```
And now to fit the models! The ${\tt age<33.5}$ category is left out, so the intercept coefficient of
\$94,160 can be interpreted as the average salary for those under 33.5 years
of age, and the other coefficients can be interpreted as the average additional
salary for those in the other age groups.
```
fit3 = sm.GLM(df_steps.wage, df_steps_dummies.drop(['(17.938, 33.5]'], axis=1)).fit()
fit3.summary().tables[1]
```
We can produce predictions
and plots just as we did in the case of the polynomial fit.
```
# Put the test data in the same bins as the training data.
bin_mapping = np.digitize(age_grid.ravel(), bins)
# Get dummies, drop first dummy category, add constant
X_test2 = sm.add_constant(pd.get_dummies(bin_mapping).drop(1, axis=1))
# Predict the value of the generated ages using the linear model
pred2 = fit3.predict(X_test2)
# And the logistic model
clf2 = sm.GLM(y, df_steps_dummies.drop(['(17.938, 33.5]'], axis=1),
family=sm.families.Binomial(sm.families.links.logit))
res2 = clf2.fit()
pred3 = res2.predict(X_test2)
# Plot
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5))
fig.suptitle('Piecewise Constant', fontsize=14)
# Scatter plot with polynomial regression line
ax1.scatter(df.age, df.wage, facecolor='None', edgecolor='k', alpha=0.3)
ax1.plot(age_grid, pred2, c='b')
ax1.set_xlabel('age')
ax1.set_ylabel('wage')
ax1.set_ylim(ymin=0)
# Logistic regression showing Pr(wage>250) for the age range.
ax2.plot(np.arange(df.age.min(), df.age.max()).reshape(-1,1), pred3, color='b')
# Rug plot showing the distribution of wage>250 in the training data.
# 'True' on the top, 'False' on the bottom.
ax2.scatter(df.age, y/5, s=30, c='grey', marker='|', alpha=0.7)
ax2.set_ylim(-0.01,0.21)
ax2.set_xlabel('age')
ax2.set_ylabel('Pr(wage>250|age)')
```
To get credit for this lab, post your responses to the following questions:
- What is one real-world example where you might try polynomial regression?
- What is one real-world example where you might try using a step function?
to Piazza: https://piazza.com/class/igwiv4w3ctb6rg?cid=48
# Analysis of schemes for the diffusion equation
<div id="diffu:pde1:analysis"></div>
The numerical experiments in the sections [diffu:pde1:FE:experiments](#diffu:pde1:FE:experiments) and [diffu:pde1:theta:experiments](#diffu:pde1:theta:experiments)
reveal that there are some
numerical problems with the Forward Euler and Crank-Nicolson schemes:
sawtooth-like noise is sometimes present in solutions that are,
from a mathematical point of view, expected to be smooth.
This section presents a mathematical analysis that explains the
observed behavior and arrives at criteria for obtaining numerical
solutions that reproduce the qualitative properties of the exact
solutions. In short, we shall explain what is observed in
Figures [diffu:pde1:FE:fig:F=0.5](#diffu:pde1:FE:fig:F=0.5)-[diffu:pde1:CN:fig:F=10](#diffu:pde1:CN:fig:F=10).
<!-- [diffu:pde1:FE:fig:F=0.5](#diffu:pde1:FE:fig:F=0.5), -->
<!-- [diffu:pde1:FE:fig:F=0.25](#diffu:pde1:FE:fig:F=0.25), -->
<!-- [diffu:pde1:FE:fig:F=0.51](#diffu:pde1:FE:fig:F=0.51), -->
<!-- [diffu:pde1:FE:fig:gauss:F=0.5](#diffu:pde1:FE:fig:gauss:F=0.5), -->
<!-- [diffu:pde1:BE:fig:F=0.5](#diffu:pde1:BE:fig:F=0.5), -->
<!-- [diffu:pde1:CN:fig:F=3](#diffu:pde1:CN:fig:F=3), -->
<!-- and -->
<!-- [diffu:pde1:CN:fig:F=10](#diffu:pde1:CN:fig:F=10). -->
## Properties of the solution
<div id="diffu:pde1:analysis:uex"></div>
A particular characteristic of diffusive processes, governed
by an equation like
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:eq"></div>
$$
\begin{equation}
u_t = \dfc u_{xx},
\label{diffu:pde1:eq} \tag{1}
\end{equation}
$$
is that the initial shape $u(x,0)=I(x)$ spreads out in space with
time, along with a decaying amplitude. Three different examples will
illustrate the spreading of $u$ in space and the decay in time.
### Similarity solution
The diffusion equation ([1](#diffu:pde1:eq)) admits solutions
that depend on $\eta = (x-c)/\sqrt{4\dfc t}$ for a given value
of $c$. One particular solution
is
<!-- Equation labels as ordinary links -->
<div id="diffu:pdf1:erf:sol"></div>
$$
\begin{equation}
u(x,t) = a\,\mbox{erf}(\eta) + b,
\label{diffu:pdf1:erf:sol} \tag{2}
\end{equation}
$$
where
<!-- Equation labels as ordinary links -->
<div id="diffu:analysis:erf:def"></div>
$$
\begin{equation}
\mbox{erf}(\eta) = \frac{2}{\sqrt{\pi}}\int_0^\eta e^{-\zeta^2}d\zeta,
\label{diffu:analysis:erf:def} \tag{3}
\end{equation}
$$
is the *error function*, and $a$ and $b$ are arbitrary constants.
The error function lies in $(-1,1)$, is odd around $\eta =0$, and
goes relatively quickly to $\pm 1$:
$$
\begin{align*}
\lim_{\eta\rightarrow -\infty}\mbox{erf}(\eta) &=-1,\\
\lim_{\eta\rightarrow \infty}\mbox{erf}(\eta) &=1,\\
\mbox{erf}(\eta) &= -\mbox{erf}(-\eta),\\
\mbox{erf}(0) &=0,\\
\mbox{erf}(2) &=0.99532227,\\
\mbox{erf}(3) &=0.99997791
\thinspace .
\end{align*}
$$
As $t\rightarrow 0$, the error function approaches a step function centered
at $x=c$. For a diffusion problem posed on the unit interval $[0,1]$,
we may choose the step at $x=1/2$ (meaning $c=1/2$), $a=-1/2$, $b=1/2$.
Then
<!-- Equation labels as ordinary links -->
<div id="diffu:analysis:pde1:step:erf:sol"></div>
$$
\begin{equation}
u(x,t) = \frac{1}{2}\left(1 -
\mbox{erf}\left(\frac{x-\frac{1}{2}}{\sqrt{4\dfc t}}\right)\right) =
\frac{1}{2}\mbox{erfc}\left(\frac{x-\frac{1}{2}}{\sqrt{4\dfc t}}\right),
\label{diffu:analysis:pde1:step:erf:sol} \tag{4}
\end{equation}
$$
where we have introduced the *complementary error function*
$\mbox{erfc}(\eta) = 1-\mbox{erf}(\eta)$.
The solution ([4](#diffu:analysis:pde1:step:erf:sol))
implies the boundary conditions
<!-- Equation labels as ordinary links -->
<div id="diffu:analysis:pde1:p1:erf:uL"></div>
$$
\begin{equation}
u(0,t) = \frac{1}{2}\left(1 - \mbox{erf}\left(\frac{-1/2}{\sqrt{4\dfc t}}\right)\right),
\label{diffu:analysis:pde1:p1:erf:uL} \tag{5}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="diffu:analysis:pde1:p1:erf:uR"></div>
$$
\begin{equation}
u(1,t) = \frac{1}{2}\left(1 - \mbox{erf}\left(\frac{1/2}{\sqrt{4\dfc t}}\right)\right)
\label{diffu:analysis:pde1:p1:erf:uR} \tag{6}
\thinspace .
\end{equation}
$$
For small enough $t$, $u(0,t)\approx 1$ and $u(1,t)\approx 0$, but as
$t\rightarrow\infty$, $u(x,t)\rightarrow 1/2$ on $[0,1]$.
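A small sketch of how ([4](#diffu:analysis:pde1:step:erf:sol)) can be evaluated and plotted, with an arbitrarily chosen diffusion coefficient $\dfc=1$:
```
import numpy as np
from scipy.special import erfc
import matplotlib.pyplot as plt

alpha = 1.0                             # arbitrarily chosen diffusion coefficient
x = np.linspace(0, 1, 101)
for t in [1e-4, 1e-3, 1e-2, 0.1]:
    u = 0.5*erfc((x - 0.5)/np.sqrt(4*alpha*t))
    plt.plot(x, u, label='t=%g' % t)
plt.xlabel('x'); plt.ylabel('u'); plt.legend()
plt.show()
```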
### Solution for a Gaussian pulse
The standard diffusion equation $u_t = \dfc u_{xx}$ admits a
Gaussian function as solution:
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:sol:Gaussian"></div>
$$
\begin{equation}
u(x,t) = \frac{1}{\sqrt{4\pi\dfc t}} \exp{\left({-\frac{(x-c)^2}{4\dfc t}}\right)}
\label{diffu:pde1:sol:Gaussian} \tag{7}
\thinspace .
\end{equation}
$$
At $t=0$ this is a Dirac delta function, so for computational
purposes one must start to view the solution at some time $t=t_\epsilon>0$.
Replacing $t$ by $t_\epsilon +t$ in ([7](#diffu:pde1:sol:Gaussian))
makes it easy to operate with a (new) $t$ that starts at $t=0$
with an initial condition with a finite width.
The important feature of ([7](#diffu:pde1:sol:Gaussian)) is that
the standard deviation $\sigma$ of a sharp initial Gaussian pulse
increases in time according to $\sigma = \sqrt{2\dfc t}$, making
the pulse diffuse and flatten out.
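A short sketch that evaluates ([7](#diffu:pde1:sol:Gaussian)) at a few times, with an arbitrarily chosen $\dfc=1$ and a small offset time $t_\epsilon$, and reports the growing standard deviation:
```
import numpy as np
import matplotlib.pyplot as plt

alpha, c, t_eps = 1.0, 0.0, 1e-3        # arbitrarily chosen parameters
x = np.linspace(-1, 1, 201)
for t in [0, 1e-3, 1e-2, 5e-2]:
    te = t_eps + t                      # shift time to avoid the Dirac delta at t=0
    u = 1/np.sqrt(4*np.pi*alpha*te)*np.exp(-(x - c)**2/(4*alpha*te))
    plt.plot(x, u, label='t=%g, sigma=%.3f' % (t, np.sqrt(2*alpha*te)))
plt.legend()
plt.show()
```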
<!-- Mention combinations of such kernels to build up a general analytical sol? -->
<!-- Or maybe an exercise for verification. -->
### Solution for a sine component
Also, ([1](#diffu:pde1:eq)) admits a solution of the form
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:sol1"></div>
$$
\begin{equation}
u(x,t) = Qe^{-at}\sin\left( kx\right)
\label{diffu:pde1:sol1} \tag{8}
\thinspace .
\end{equation}
$$
The parameters $Q$ and $k$ can be freely chosen, while
inserting ([8](#diffu:pde1:sol1)) in ([1](#diffu:pde1:eq)) gives the constraint
$$
a = -\dfc k^2
\thinspace .
$$
A very important feature is that the initial shape $I(x)=Q\sin\left( kx\right)$
undergoes a damping $\exp{(-\dfc k^2t)}$, meaning that
rapid oscillations in space, corresponding to large $k$, are very much
faster dampened than slow oscillations in space, corresponding to small
$k$. This feature leads to a smoothing of the initial condition with time.
(In fact, one can use a few steps of the diffusion equation as
a method for removing noise in signal processing.)
To judge how good a numerical method is, we may look at its ability to
smoothen or dampen the solution in the same way as the PDE does.
The following example illustrates the damping properties of
([8](#diffu:pde1:sol1)). We consider the specific problem
$$
\begin{align*}
u_t &= u_{xx},\quad x\in (0,1),\ t\in (0,T],\\
u(0,t) &= u(1,t) = 0,\quad t\in (0,T],\\
u(x,0) & = \sin (\pi x) + 0.1\sin(100\pi x)
\thinspace .
\end{align*}
$$
The initial condition has been chosen such that adding
two solutions like ([8](#diffu:pde1:sol1)) constructs
an analytical solution to the problem:
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:sol2"></div>
$$
\begin{equation}
u(x,t) = e^{-\pi^2 t}\sin (\pi x) + 0.1e^{-\pi^2 10^4 t}\sin (100\pi x)
\label{diffu:pde1:sol2} \tag{9}
\thinspace .
\end{equation}
$$
[Figure](#diffu:pde1:fig:damping) illustrates the rapid damping of
rapid oscillations $\sin (100\pi x)$ and the very much slower damping of the
slowly varying $\sin (\pi x)$ term. After about $t=0.5\cdot10^{-4}$ the rapid
oscillations do not have a visible amplitude, while we have to wait
until $t\sim 0.5$ before the amplitude of the long wave $\sin (\pi x)$
becomes very small.
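The two time scales can be read directly off the amplitude factors in ([9](#diffu:pde1:sol2)); a tiny sketch:
```
import numpy as np

# Amplitude factors of the two components in the exact solution (9)
for t in [0, 0.5e-4, 0.5]:
    long_wave  = np.exp(-np.pi**2*t)            # sin(pi*x) component
    short_wave = 0.1*np.exp(-np.pi**2*1e4*t)    # 0.1*sin(100*pi*x) component
    print('t=%g: long wave %.4f, short wave %.2e' % (t, long_wave, short_wave))
```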
<!-- dom:FIGURE: [fig-diffu/diffusion_damping.png, width=800] Evolution of the solution of a diffusion problem: initial condition (upper left), 1/100 reduction of the small waves (upper right), 1/10 reduction of the long wave (lower left), and 1/100 reduction of the long wave (lower right). <div id="diffu:pde1:fig:damping"></div> -->
<!-- begin figure -->
<div id="diffu:pde1:fig:damping"></div>
<p>Evolution of the solution of a diffusion problem: initial condition (upper left), 1/100 reduction of the small waves (upper right), 1/10 reduction of the long wave (lower left), and 1/100 reduction of the long wave (lower right).</p>
<img src="fig-diffu/diffusion_damping.png" width=800>
<!-- end figure -->
<!-- x/sqrt(t) solution, kernel with integral -->
## Analysis of discrete equations
A counterpart to ([8](#diffu:pde1:sol1)) is the complex representation
of the same function:
$$
u(x,t) = Qe^{-at}e^{ikx},
$$
where $i=\sqrt{-1}$ is the imaginary unit.
We can add such functions, often referred to as wave components,
to make a Fourier representation
of a general solution of the diffusion equation:
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:u:Fourier"></div>
$$
\begin{equation}
u(x,t) \approx \sum_{k\in K} b_k e^{-\dfc k^2t}e^{ikx},
\label{diffu:pde1:u:Fourier} \tag{10}
\end{equation}
$$
where $K$ is a set of an infinite number of $k$ values needed to construct
the solution. In practice, however, the series is truncated and
$K$ is a finite set of $k$ values
needed to build a good approximate solution.
Note that ([9](#diffu:pde1:sol2)) is a special case of
([10](#diffu:pde1:u:Fourier)) where $K=\{\pi, 100\pi\}$, $b_{\pi}=1$,
and $b_{100\pi}=0.1$.
The amplitudes $b_k$ of the individual Fourier waves must be determined
from the initial condition. At $t=0$ we have $u\approx\sum_kb_k\exp{(ikx)}$
and find $K$ and $b_k$ such that
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
I(x) \approx \sum_{k\in K} b_k e^{ikx}\thinspace .
\label{_auto1} \tag{11}
\end{equation}
$$
(The relevant formulas for $b_k$ come from Fourier analysis, or
equivalently, a least-squares method for approximating $I(x)$
in a function space with basis $\exp{(ikx)}$.)
Much insight about the behavior of numerical methods can be obtained
by investigating how a wave component $\exp{(-\dfc k^2
t)}\exp{(ikx)}$ is treated by the numerical scheme. It appears that
such wave components are also solutions of the schemes, but the
damping factor $\exp{(-\dfc k^2 t)}$ varies among the schemes. To
ease the forthcoming algebra, we write the damping factor as
$A^n$. The exact amplification factor corresponding to $A$ is $\Aex =
\exp{(-\dfc k^2\Delta t)}$.
## Analysis of the finite difference schemes
<div id="diffu:pde1:analysis:details"></div>
We have seen that a general solution of the diffusion equation
can be built as a linear combination of basic components
$$
e^{-\dfc k^2t}e^{ikx} \thinspace .
$$
A fundamental question is whether such components are also solutions of
the finite difference schemes. This is indeed the case, but the
amplitude $\exp{(-\dfc k^2t)}$ might be modified (which also happens when
solving the ODE counterpart $u'=-\dfc u$).
We therefore look for numerical solutions of the form
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:analysis:uni"></div>
$$
\begin{equation}
u^n_q = A^n e^{ikq\Delta x} = A^ne^{ikx},
\label{diffu:pde1:analysis:uni} \tag{12}
\end{equation}
$$
where the amplification factor $A$
must be determined by inserting the component into an actual scheme.
Note that $A^n$ means $A$ raised to the power of $n$, $n$ being the
index in the time mesh, while the superscript $n$ in $u^n_q$ just
denotes $u$ at time $t_n$.
### Stability
The exact amplification factor is $\Aex=\exp{(-\dfc k^2\Delta t)}$.
We should therefore require $|A| < 1$ to have a decaying numerical
solution as well. If
$-1\leq A<0$, $A^n$ will change sign from time level to
time level, and we get stable, non-physical oscillations in the numerical
solutions that are not present in the exact solution.
### Accuracy
To determine how accurately a finite difference scheme treats one
wave component ([12](#diffu:pde1:analysis:uni)), we see that the basic
deviation from the exact solution is reflected in how well
$A^n$ approximates $\Aex^n$,
or how well $A$ approximates $\Aex$.
We can plot $\Aex$ and the various expressions for $A$, and we can
make Taylor expansions of $A/\Aex$ to see the error more analytically.
<!-- We shall in particular investigate the error $\Aex - A$ in the -->
<!-- amplification factor. -->
### Truncation error
As an alternative to examining the accuracy of the damping of a wave
component, we can perform a general truncation error analysis as
explained in "Truncation error analysis": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc). Such results are more general, but
less detailed than what we get from the wave component analysis. The
truncation error can almost always be computed and represents the
error in the numerical model when the exact solution is substituted
into the equations. In particular, the truncation error analysis tells
the order of the scheme, which is of fundamental importance when
verifying codes based on empirical estimation of convergence rates.
## Analysis of the Forward Euler scheme
<div id="diffu:pde1:analysis:FE"></div>
<!-- 2DO: refer to vib and wave -->
The Forward Euler finite difference scheme for $u_t = \dfc u_{xx}$ can
be written as
$$
[D_t^+ u = \dfc D_xD_x u]^n_q\thinspace .
$$
Inserting a wave component ([12](#diffu:pde1:analysis:uni))
in the scheme demands calculating the terms
$$
e^{ikq\Delta x}[D_t^+ A]^n = e^{ikq\Delta x}A^n\frac{A-1}{\Delta t},
$$
and
$$
A^nD_xD_x [e^{ikx}]_q = A^n\left( - e^{ikq\Delta x}\frac{4}{\Delta x^2}
\sin^2\left(\frac{k\Delta x}{2}\right)\right)
\thinspace .
$$
Inserting these terms in the discrete equation and
dividing by $A^n e^{ikq\Delta x}$ leads to
$$
\frac{A-1}{\Delta t} = -\dfc \frac{4}{\Delta x^2}\sin^2\left(
\frac{k\Delta x}{2}\right),
$$
and consequently
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
A = 1 -4F\sin^2 p
\label{_auto2} \tag{13}
\end{equation}
$$
where
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
F = \frac{\dfc\Delta t}{\Delta x^2}
\label{_auto3} \tag{14}
\end{equation}
$$
is the *numerical Fourier number*, and $p=k\Delta x/2$.
The complete numerical solution is then
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
u^n_q = \left(1 -4F\sin^2 p\right)^ne^{ikq\Delta x}
\thinspace .
\label{_auto4} \tag{15}
\end{equation}
$$
### Stability
We easily see that $A\leq 1$. However, $A$ can be less than $-1$,
which will lead
to growth of a numerical wave component. The criterion $A\geq -1$ implies
$$
4F\sin^2 p\leq 2
\thinspace .
$$
The worst case is when $\sin^2 p=1$, so a sufficient criterion for
stability is
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
F\leq {\frac{1}{2}},
\label{_auto5} \tag{16}
\end{equation}
$$
or expressed as a condition on $\Delta t$:
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
\Delta t\leq \frac{\Delta x^2}{2\dfc}\thinspace .
\label{_auto6} \tag{17}
\end{equation}
$$
Note that halving the spatial mesh size, $\Delta x \rightarrow {\frac{1}{2}}
\Delta x$, requires $\Delta t$ to be reduced by a factor of $1/4$.
The method hence becomes very expensive for fine spatial meshes.
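In code, the restriction may be expressed as follows (a sketch with an arbitrarily chosen $\dfc$):
```
def max_stable_dt_FE(alpha, dx):
    """Largest time step fulfilling F = alpha*dt/dx**2 <= 1/2."""
    return dx**2/(2*alpha)

alpha = 1.0                                     # arbitrarily chosen
for dx in [0.1, 0.05, 0.025]:
    print('dx=%.3f: max dt = %.2e' % (dx, max_stable_dt_FE(alpha, dx)))
```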
<!-- 2DO: verification based on exact solutions -->
### Accuracy
Since $A$ is expressed in terms of $F$ and the parameter we now call
$p=k\Delta x/2$, we should also express $\Aex$ by $F$ and $p$. The exponent
in $\Aex$ is $-\dfc k^2\Delta t$, which equals $-F k^2\Delta x^2=-F4p^2$.
Consequently,
$$
\Aex = \exp{(-\dfc k^2\Delta t)} = \exp{(-4Fp^2)}
\thinspace .
$$
All our $A$ expressions as well as $\Aex$ are now functions of the two
dimensionless parameters $F$ and $p$.
Computing
the Taylor series expansion of $A/\Aex$ in terms of $F$
can easily be done with aid of `sympy`:
```
def A_exact(F, p):
return exp(-4*F*p**2)
def A_FE(F, p):
return 1 - 4*F*sin(p)**2
from sympy import *
F, p = symbols('F p')
A_err_FE = A_FE(F, p)/A_exact(F, p)
print(A_err_FE.series(F, 0, 6))
```
The result is
$$
\frac{A}{\Aex} = 1 - 4 F \sin^{2}p + 4F p^{2} - 16F^{2} p^{2} \sin^{2}p + 8 F^{2} p^{4} + \cdots
$$
Recalling that $F=\dfc\Delta t/\Delta x^2$, $p=k\Delta x/2$, and that
$\sin^2p\leq 1$, we
realize that the dominating terms in $A/\Aex$ are at most
$$
1 - 4\dfc \frac{\Delta t}{\Delta x^2} +
\dfc\Delta t - 4\dfc^2\Delta t^2
+ \dfc^2 \Delta t^2\Delta x^2 + \cdots
\thinspace .
$$
### Truncation error
We follow the theory explained in
"Truncation error analysis": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc). The recipe is to set up the
scheme in operator notation and use formulas from
"Overview of leading-order error terms in finite difference formulas": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc) to derive an expression for
the residual. The details are documented in
"Linear diffusion equation in 1D": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc). We end up with a truncation error
$$
R^n_i = \Oof{\Delta t} + \Oof{\Delta x^2}\thinspace .
$$
Although this is not the true error $\uex(x_i,t_n) - u^n_i$, it indicates
that the true error is of the form
$$
E = C_t\Delta t + C_x\Delta x^2
$$
for two unknown constants $C_t$ and $C_x$.
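The form of $E$ suggests the usual verification recipe: run experiments for a sequence of discretization parameters and estimate the rate $r$ in $E\sim h^r$ from consecutive pairs. A sketch of the rate computation, assuming lists `h` and `E` collected from such experiments:
```
import numpy as np

def convergence_rates(h, E):
    """Estimate r in E ~ C*h**r from consecutive (h, E) pairs."""
    return [np.log(E[i-1]/E[i])/np.log(h[i-1]/h[i])
            for i in range(1, len(h))]

# Made-up example behaving like E = 2*h; the rates should approach 1
h = [0.1, 0.05, 0.025, 0.0125]
E = [2*hi for hi in h]
print(convergence_rates(h, E))
```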
## Analysis of the Backward Euler scheme
<div id="diffu:pde1:analysis:BE"></div>
Discretizing $u_t = \dfc u_{xx}$ by a Backward Euler scheme,
$$
[D_t^- u = \dfc D_xD_x u]^n_q,
$$
and inserting a wave component ([12](#diffu:pde1:analysis:uni)),
leads to calculations similar to those arising from the Forward Euler scheme,
but since
$$
e^{ikq\Delta x}[D_t^- A]^n = A^ne^{ikq\Delta x}\frac{1 - A^{-1}}{\Delta t},
$$
we get
$$
\frac{1-A^{-1}}{\Delta t} = -\dfc \frac{4}{\Delta x^2}\sin^2\left(
\frac{k\Delta x}{2}\right),
$$
and then
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:analysis:BE:A"></div>
$$
\begin{equation}
A = \left(1 + 4F\sin^2p\right)^{-1}
\label{diffu:pde1:analysis:BE:A} \tag{18}
\thinspace .
\end{equation}
$$
The complete numerical solution can be written
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
u^n_q = \left(1 + 4F\sin^2 p\right)^{-n}
e^{ikq\Delta x} \thinspace .
\label{_auto7} \tag{19}
\end{equation}
$$
### Stability
We see from ([18](#diffu:pde1:analysis:BE:A)) that $0<A<1$, which means
that all numerical wave components are stable and non-oscillatory
for any $\Delta t >0$.
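The accuracy of ([18](#diffu:pde1:analysis:BE:A)) can be examined with the same `sympy` recipe as for the Forward Euler scheme; a sketch:
```
from sympy import symbols, exp, sin

F, p = symbols('F p')
A_exact = exp(-4*F*p**2)
A_BE = 1/(1 + 4*F*sin(p)**2)
print((A_BE/A_exact).series(F, 0, 3))
```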
### Truncation error
The derivation of the truncation error for the Backward Euler scheme is almost
identical to that for the Forward Euler scheme. We end up with
$$
R^n_i = \Oof{\Delta t} + \Oof{\Delta x^2}\thinspace .
$$
## Analysis of the Crank-Nicolson scheme
<div id="diffu:pde1:analysis:CN"></div>
The Crank-Nicolson scheme can be written as
$$
[D_t u = \dfc D_xD_x \overline{u}^x]^{n+\frac{1}{2}}_q,
$$
or
$$
[D_t u]^{n+\frac{1}{2}}_q = \frac{1}{2}\dfc\left( [D_xD_x u]^{n}_q +
[D_xD_x u]^{n+1}_q\right)
\thinspace .
$$
Inserting ([12](#diffu:pde1:analysis:uni)) in the time derivative approximation
leads to
$$
[D_t A^n e^{ikq\Delta x}]^{n+\frac{1}{2}} = A^{n+\frac{1}{2}} e^{ikq\Delta x}\frac{A^{\frac{1}{2}}-A^{-\frac{1}{2}}}{\Delta t} = A^ne^{ikq\Delta x}\frac{A-1}{\Delta t}
\thinspace .
$$
Inserting ([12](#diffu:pde1:analysis:uni)) in the other terms
and dividing by
$A^ne^{ikq\Delta x}$ gives the relation
$$
\frac{A-1}{\Delta t} = -\frac{1}{2}\dfc\frac{4}{\Delta x^2}
\sin^2\left(\frac{k\Delta x}{2}\right)
(1 + A),
$$
and after some more algebra,
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
A = \frac{ 1 - 2F\sin^2p}{1 + 2F\sin^2p}
\thinspace .
\label{_auto8} \tag{20}
\end{equation}
$$
The exact numerical solution is hence
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
u^n_q = \left(\frac{ 1 - 2F\sin^2p}{1 + 2F\sin^2p}\right)^ne^{ikq\Delta x}
\thinspace .
\label{_auto9} \tag{21}
\end{equation}
$$
### Stability
The criteria $A>-1$ and $A<1$ are fulfilled for any $\Delta t >0$.
Therefore, the solution cannot grow, but it will oscillate if
$1-2F\sin^2 p < 0$. To avoid such non-physical oscillations, we must demand
$F\leq\frac{1}{2}$.
### Truncation error
The truncation error is derived in
"Linear diffusion equation in 1D": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc):
$$
R^{n+\frac{1}{2}}_i = \Oof{\Delta x^2} + \Oof{\Delta t^2}\thinspace .
$$
## Analysis of the Leapfrog scheme
<div id="diffu:pde1:analysis:leapfrog"></div>
An attractive feature of the Forward Euler scheme is the explicit
time stepping and no need for solving linear systems. However, the
accuracy in time is only $\Oof{\Delta t}$. We can get an explicit
*second-order* scheme in time by using the Leapfrog method:
$$
[D_{2t} u = \dfc D_xD_x u + f]^n_q\thinspace .
$$
Written out,
$$
u_q^{n+1} = u_q^{n-1} + \frac{2\dfc\Delta t}{\Delta x^2}
(u^{n}_{q+1} - 2u^n_q + u^n_{q-1}) + 2\Delta t\, f(x_q,t_n)\thinspace .
$$
We need some formula for the first step, $u^1_q$, but for that we can use
a Forward Euler step.
Unfortunately, the Leapfrog scheme is always unstable for the
diffusion equation. To see this, we insert a wave component $A^ne^{ikx}$
and get
$$
\frac{A - A^{-1}}{\Delta t} = -\dfc \frac{4}{\Delta x^2}\sin^2 p,
$$
or
$$
A^2 + 4F \sin^2 p\, A - 1 = 0,
$$
which has roots
$$
A = -2F\sin^2 p \pm \sqrt{4F^2\sin^4 p + 1}\thinspace .
$$
The product of the two roots equals $-1$, so one of them always has magnitude larger than one; in particular, the root with the minus sign is less than or equal to $-1$, and strictly less than $-1$ whenever $\sin^2 p>0$. The corresponding wave component therefore grows in magnitude while flipping sign from step to step, which is not in
accordance with the physics of the problem.
However, for a PDE with a first-order derivative in space, instead of
a second-order one, the Leapfrog scheme performs very well.
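A quick numerical check of the two roots for the shortest wave ($\sin^2p=1$), with a few arbitrarily chosen $F$ values:
```
import numpy as np

s = 1.0                                 # sin(p)**2 for the shortest wave, p = pi/2
for F in [0.1, 0.5, 2.0]:
    A_minus = -2*F*s - np.sqrt(4*F**2*s**2 + 1)
    A_plus  = -2*F*s + np.sqrt(4*F**2*s**2 + 1)
    print('F=%.1f: roots %.3f and %.3f, product %.3f'
          % (F, A_minus, A_plus, A_minus*A_plus))
```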
## Summary of accuracy of amplification factors
We can plot the various amplification factors against $p=k\Delta x/2$
for different choices of the $F$ parameter. Figures
[diffu:pde1:fig:A:err:C20](#diffu:pde1:fig:A:err:C20), [diffu:pde1:fig:A:err:C0.5](#diffu:pde1:fig:A:err:C0.5), and
[diffu:pde1:fig:A:err:C0.1](#diffu:pde1:fig:A:err:C0.1) show how long and small waves are
damped by the various schemes compared to the exact damping. As long
as all schemes are stable, the amplification factor is positive,
except for Crank-Nicolson when $F>0.5$.
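Such plots can be produced with a few lines of code; a sketch for one arbitrarily chosen value of $F$:
```
import numpy as np
import matplotlib.pyplot as plt

F = 2.0                                 # arbitrarily chosen Fourier number
p = np.linspace(0, np.pi/2, 101)
A_exact = np.exp(-4*F*p**2)
A_FE = 1 - 4*F*np.sin(p)**2
A_BE = 1/(1 + 4*F*np.sin(p)**2)
A_CN = (1 - 2*F*np.sin(p)**2)/(1 + 2*F*np.sin(p)**2)

for A, name in [(A_exact, 'exact'), (A_FE, 'FE'), (A_BE, 'BE'), (A_CN, 'CN')]:
    plt.plot(p, A, label=name)
plt.xlabel('p = k*dx/2'); plt.ylabel('amplification factor'); plt.legend()
plt.show()
```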
<!-- dom:FIGURE: [fig-diffu/diffusion_A_F20_F2.png, width=800] Amplification factors for large time steps. <div id="diffu:pde1:fig:A:err:C20"></div> -->
<!-- begin figure -->
<div id="diffu:pde1:fig:A:err:C20"></div>
<p>Amplification factors for large time steps.</p>
<img src="fig-diffu/diffusion_A_F20_F2.png" width=800>
<!-- end figure -->
<!-- dom:FIGURE: [fig-diffu/diffusion_A_F05_F025.png, width=800] Amplification factors for time steps around the Forward Euler stability limit. <div id="diffu:pde1:fig:A:err:C0.5"></div> -->
<!-- begin figure -->
<div id="diffu:pde1:fig:A:err:C0.5"></div>
<p>Amplification factors for time steps around the Forward Euler stability limit.</p>
<img src="fig-diffu/diffusion_A_F05_F025.png" width=800>
<!-- end figure -->
<!-- dom:FIGURE: [fig-diffu/diffusion_A_F01_F001.png, width=800] Amplification factors for small time steps. <div id="diffu:pde1:fig:A:err:C0.1"></div> -->
<!-- begin figure -->
<div id="diffu:pde1:fig:A:err:C0.1"></div>
<p>Amplification factors for small time steps.</p>
<img src="fig-diffu/diffusion_A_F01_F001.png" width=800>
<!-- end figure -->
The effect of negative amplification factors is that $A^n$ changes
sign from one time level to the next, thereby giving rise to
oscillations in time in an animation of the solution. We see from
[Figure](#diffu:pde1:fig:A:err:C20) that for $F=20$, waves with
$p\geq \pi/4$ undergo a damping close to $-1$, which means that the
amplitude does not decay and that the wave component jumps up and down
(flips amplitude) in time. For $F=2$ we have a damping of a factor of
0.5 from one time level to the next, which is very much smaller than
the exact damping. Short waves will therefore fail to be effectively
dampened. These waves will manifest themselves as high frequency
oscillatory noise in the solution.
A value $p=\pi/4$ corresponds to four mesh points per wave length of
$e^{ikx}$, while $p=\pi/2$ implies only two points per wave length,
which is the smallest number of points we can have to represent the
wave on the mesh.
To demonstrate the oscillatory behavior of the Crank-Nicolson scheme,
we choose an initial condition that leads to short waves with
significant amplitude. A discontinuous $I(x)$ will in particular serve
this purpose: Figures [diffu:pde1:CN:fig:F=3](#diffu:pde1:CN:fig:F=3) and
[diffu:pde1:CN:fig:F=10](#diffu:pde1:CN:fig:F=10) correspond to $F=3$ and $F=10$,
respectively, and we see how short waves pollute the overall solution.
## Analysis of the 2D diffusion equation
<div id="diffu:2D:analysis"></div>
Diffusion in several dimensions is treated later, but it is appropriate to
include the analysis here. We first consider the 2D diffusion equation
$$
u_{t} = \dfc(u_{xx} + u_{yy}),
$$
which has Fourier component solutions of the form
$$
u(x,y,t) = Ae^{-\dfc k^2t}e^{i(k_x x + k_yy)},
$$
and the schemes have discrete versions of this Fourier component:
$$
u^{n}_{q,r} = A\xi^{n}e^{i(k_x q\Delta x + k_y r\Delta y)}\thinspace .
$$
### The Forward Euler scheme
For the Forward Euler discretization,
$$
[D_t^+u = \dfc(D_xD_x u + D_yD_y u)]_{q,r}^n,
$$
we get
$$
\frac{\xi - 1}{\Delta t}
=
-\dfc\frac{4}{\Delta x^2}\sin^2\left(\frac{k_x\Delta x}{2}\right) -
\dfc\frac{4}{\Delta y^2}\sin^2\left(\frac{k_y\Delta y}{2}\right)\thinspace .
$$
Introducing
$$
p_x = \frac{k_x\Delta x}{2},\quad p_y = \frac{k_y\Delta y}{2},
$$
we can write the equation for $\xi$ more compactly as
$$
\frac{\xi - 1}{\Delta t}
=
-\dfc\frac{4}{\Delta x^2}\sin^2 p_x -
\dfc\frac{4}{\Delta y^2}\sin^2 p_y,
$$
and solve for $\xi$:
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:xi"></div>
$$
\begin{equation}
\xi = 1 - 4F_x\sin^2 p_x - 4F_y\sin^2 p_y\thinspace .
\label{diffu:2D:analysis:xi} \tag{22}
\end{equation}
$$
The complete numerical solution for a wave component is
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:FE:numexact"></div>
$$
\begin{equation}
u^{n}_{q,r} = A(1 - 4F_x\sin^2 p_x - 4F_y\sin^2 p_y)^n
e^{i(k_xq\Delta x + k_yr\Delta y)}\thinspace .
\label{diffu:2D:analysis:FE:numexact} \tag{23}
\end{equation}
$$
For stability we demand $-1\leq\xi\leq 1$, and $-1\leq\xi$ is the
critical limit, since clearly $\xi \leq 1$, and the worst case
happens when the sines are at their maximum. The stability criterion
becomes
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:FE:stab"></div>
$$
\begin{equation}
F_x + F_y \leq \frac{1}{2}\thinspace .
\label{diffu:2D:analysis:FE:stab} \tag{24}
\end{equation}
$$
For the special, yet common, case $\Delta x=\Delta y=h$, the
stability criterion can be written as
$$
\Delta t \leq \frac{h^2}{2d\dfc},
$$
where $d$ is the number of space dimensions: $d=1,2,3$.
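In code (a sketch with arbitrarily chosen $\dfc$ and $h$):
```
def max_stable_dt(alpha, h, d):
    """Forward Euler stability limit for the diffusion equation in d dimensions."""
    return h**2/(2*d*alpha)

alpha, h = 1.0, 0.05                    # arbitrarily chosen values
for d in (1, 2, 3):
    print('d=%d: dt <= %.2e' % (d, max_stable_dt(alpha, h, d)))
```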
### The Backward Euler scheme
The Backward Euler method,
$$
[D_t^-u = \dfc(D_xD_x u + D_yD_y u)]_{q,r}^n,
$$
results in
$$
1 - \xi^{-1} = - 4F_x \sin^2 p_x - 4F_y \sin^2 p_y,
$$
and
$$
\xi = (1 + 4F_x \sin^2 p_x + 4F_y \sin^2 p_y)^{-1},
$$
which is always in $(0,1]$. The solution for a wave component becomes
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:BN:numexact"></div>
$$
\begin{equation}
u^{n}_{q,r} = A(1 + 4F_x\sin^2 p_x + 4F_y\sin^2 p_y)^{-n}
e^{i(k_xq\Delta x + k_yr\Delta y)}\thinspace .
\label{diffu:2D:analysis:BN:numexact} \tag{25}
\end{equation}
$$
### The Crank-Nicolson scheme
With a Crank-Nicolson discretization,
$$
[D_tu]^{n+\frac{1}{2}}_{q,r} =
\frac{1}{2} [\dfc(D_xD_x u + D_yD_y u)]_{q,r}^{n+1} +
\frac{1}{2} [\dfc(D_xD_x u + D_yD_y u)]_{q,r}^n,
$$
we have, after some algebra,
$$
\xi = \frac{1 - 2(F_x\sin^2 p_x + F_y\sin^2p_y)}{1 + 2(F_x\sin^2 p_x + F_y\sin^2p_y)}\thinspace .
$$
The fraction on the right-hand side is always less than 1, so stability
in the sense of non-growing wave components is guaranteed for all
physical and numerical parameters. However,
the fraction can become negative and result in non-physical
oscillations. This phenomenon happens when
$$
F_x\sin^2 p_x + F_y\sin^2p_y > \frac{1}{2}\thinspace .
$$
A criterion against non-physical oscillations is therefore
$$
F_x + F_y \leq \frac{1}{2},
$$
which is the same limit as the stability criterion for the Forward Euler
scheme.
The exact discrete solution is
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:CN:numexact"></div>
$$
\begin{equation}
u^{n}_{q,r} = A
\left(
\frac{1 - 2(F_x\sin^2 p_x + F_y\sin^2p_y)}{1 + 2(F_x\sin^2 p_x + F_y\sin^2p_y)}
\right)^n
e^{i(k_xq\Delta x + k_yr\Delta y)}\thinspace .
\label{diffu:2D:analysis:CN:numexact} \tag{26}
\end{equation}
$$
## Explanation of numerical artifacts
The behavior of the solution generated by Forward Euler discretization in time (and centered
differences in space) is summarized at the end of
the section [diffu:pde1:FE:experiments](#diffu:pde1:FE:experiments). Can we, from the analysis
above, explain the behavior?
We may start by looking at [Figure](#diffu:pde1:FE:fig:F=0.51)
where $F=0.51$. The figure shows that the solution is unstable and
grows in time. The stability limit for such growth is $F=0.5$ and
since the $F$ in this simulation is slightly larger, growth is
unavoidable.
[Figure](#diffu:pde1:FE:fig:F=0.5) has unexpected features:
we would expect the solution of the diffusion equation to be
smooth, but the graphs in [Figure](#diffu:pde1:FE:fig:F=0.5)
contain non-smooth noise. Turning to [Figure](#diffu:pde1:FE:fig:gauss:F=0.5), which has a quite similar
initial condition, we see that the curves are indeed smooth.
The problem with the results in [Figure](#diffu:pde1:FE:fig:F=0.5)
is that the initial condition is discontinuous. To represent it, we
need a significant amplitude on the shortest waves in the mesh.
However, for $F=0.5$, the shortest wave ($p=\pi/2$) gives
the amplitude in the numerical solution as $(1-4F)^n$, which oscillates
between negative and positive values at subsequent time levels
for $F>\frac{1}{4}$. Since the shortest waves have visible amplitudes in
the solution profile, the oscillations become visible. The
smooth initial condition in [Figure](#diffu:pde1:FE:fig:gauss:F=0.5),
on the other hand, leads to very small amplitudes of the shortest waves.
That these waves then oscillate in a non-physical way for
$F=0.5$ is not a visible effect. The oscillations
in time in the amplitude $(1-4F)^n$ disappear for $F\leq\frac{1}{4}$,
and that is why also the discontinuous initial condition always leads to
smooth solutions in [Figure](#diffu:pde1:FE:fig:F=0.25), where
$F=\frac{1}{4}$.
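The behavior of the shortest wave for different $F$ values is easy to tabulate (here also including $F=0.4$, which gives a decaying oscillation); a sketch:
```
# Amplitude of the shortest wave (p=pi/2) after n time levels: (1 - 4F)**n
for F in [0.5, 0.4, 0.25]:
    print('F=%.2f:' % F, [round((1 - 4*F)**n, 3) for n in range(6)])
```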
Turning the attention to the Backward Euler scheme and the experiments
in [Figure](#diffu:pde1:BE:fig:F=0.5), we see that even the discontinuous
initial condition gives smooth solutions for $F=0.5$ (and in fact all other
$F$ values). From the exact expression of the numerical amplitude,
$(1 + 4F\sin^2p)^{-1}$, we realize that this factor can never flip between
positive and negative values, and no instabilities can occur. The conclusion
is that the Backward Euler scheme always produces smooth solutions.
Also, the Backward Euler scheme guarantees that the solution cannot grow
in time (unless we add a source term to the PDE, but that is meant to
represent a physically relevant growth).
Finally, we have some small, strange artifacts when simulating the
development of the initial plug profile with the Crank-Nicolson scheme,
see [Figure](#diffu:pde1:CN:fig:F=3), where $F=3$.
The Crank-Nicolson scheme cannot give growing amplitudes, but it may
give oscillating amplitudes in time. The critical factor is
$1 - 2F\sin^2p$, which for the shortest waves ($p=\pi/2$) indicates
a stability limit $F=0.5$. With the discontinuous initial condition, we have
enough amplitude on the shortest waves so their wrong behavior is visible,
and this is what we see as small instabilities in
[Figure](#diffu:pde1:CN:fig:F=10). The only remedy is to lower the $F$ value.
# Exercises
<!-- --- begin exercise --- -->
## Exercise 1: Explore symmetry in a 1D problem
<div id="diffu:exer:1D:gaussian:symmetric"></div>
This exercise simulates the exact solution ([7](#diffu:pde1:sol:Gaussian)).
Suppose for simplicity that $c=0$.
**a)**
Formulate an initial-boundary value problem that has
([7](#diffu:pde1:sol:Gaussian)) as solution in the domain $[-L,L]$.
Use the exact solution ([7](#diffu:pde1:sol:Gaussian)) as Dirichlet
condition at the boundaries.
Simulate the diffusion of the Gaussian peak. Observe that the
solution is symmetric around $x=0$.
**b)**
Show from ([7](#diffu:pde1:sol:Gaussian)) that $u_x(c,t)=0$.
Since the solution is symmetric around $x=c=0$, we can solve the
numerical problem in half of the domain, using a *symmetry boundary condition*
$u_x=0$ at $x=0$. Set up the
initial-boundary value problem in this case. Simulate the
diffusion problem in $[0,L]$ and compare with the solution in a).
<!-- --- begin solution of exercise --- -->
**Solution.**
$$
\begin{align*}
u_t &= \dfc u_{xx},\\
u_x(0,t) &= 0,\\
u(L,t)& =\frac{1}{\sqrt{4\pi\dfc t}} \exp{\left({-\frac{x^2}{4\dfc t}}\right)}\thinspace .
\end{align*}
$$
<!-- --- end solution of exercise --- -->
Filename: `diffu_symmetric_gaussian`.
<!-- --- end exercise --- -->
<!-- --- begin exercise --- -->
## Exercise 2: Investigate approximation errors from a $u_x=0$ boundary condition
<div id="diffu:exer:1D:ux:onesided"></div>
We consider the problem solved in [Exercise 1: Explore symmetry in a 1D problem](#diffu:exer:1D:gaussian:symmetric)
part b). The boundary condition $u_x(0,t)=0$ can be implemented in
two ways: 1) by a standard symmetric finite difference $[D_{2x}u]_i^n=0$,
or 2) by a one-sided difference $[D^+u=0]^n_i=0$.
Investigate the effect of these two conditions on the
convergence rate in space.
<!-- --- begin hint in exercise --- -->
**Hint.**
If you use a Forward Euler scheme, choose a discretization parameter
$h=\Delta t = \Delta x^2$ and assume the error goes like $E\sim h^r$.
The error in the scheme is $\Oof{\Delta t,\Delta x^2}$ so one should
expect that the estimated $r$ approaches 1. The question is if
a one-sided difference approximation to $u_x(0,t)=0$ destroys this
convergence rate.
<!-- --- end hint in exercise --- -->
Filename: `diffu_onesided_fd`.
<!-- --- end exercise --- -->
<!-- --- begin exercise --- -->
## Exercise 3: Experiment with open boundary conditions in 1D
<div id="diffu:exer:1D:openBC"></div>
We address diffusion of a Gaussian function
as in [Exercise 1: Explore symmetry in a 1D problem](#diffu:exer:1D:gaussian:symmetric),
in the domain $[0,L]$,
but now we shall explore different types of boundary
conditions on $x=L$. In real-life problems we do not know
the exact solution on $x=L$ and must use something simpler.
**a)**
Imagine that we want to solve the problem numerically on
$[0,L]$, with a symmetry boundary condition $u_x=0$ at $x=0$,
but we do not know the exact solution and cannot of that
reason assign a correct Dirichlet condition at $x=L$.
One idea is to simply set $u(L,t)=0$ since this will be an
accurate approximation before the diffused pulse reaches $x=L$
and even thereafter it might be a satisfactory condition if the exact $u$ has
a small value.
Let $\uex$ be the exact solution and let $u$ be the solution
of $u_t=\dfc u_{xx}$ with an initial Gaussian pulse and
the boundary conditions $u_x(0,t)=u(L,t)=0$. Derive a diffusion
problem for the error $e=\uex - u$. Solve this problem
numerically using an exact Dirichlet condition at $x=L$.
Animate the evolution of the error and make a curve plot of
the error measure
$$
E(t)=\sqrt{\frac{\int_0^L e^2dx}{\int_0^L udx}}\thinspace .
$$
Is this a suitable error measure for the present problem?
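A sketch of how $E(t)$ can be computed from the mesh values at one time level, assuming arrays `x`, `e`, and `u`:
```
import numpy as np

def error_measure(x, e, u):
    """E = sqrt(int(e**2 dx)/int(u dx)) by the trapezoidal rule."""
    return np.sqrt(np.trapz(e**2, x)/np.trapz(u, x))
```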
**b)**
Instead of using $u(L,t)=0$ as approximate boundary condition for
letting the diffused Gaussian pulse move out of our finite domain,
one may try $u_x(L,t)=0$ since the solution for large $t$ is
quite flat. Argue that this condition gives a completely wrong
asymptotic solution as $t\rightarrow \infty$. To do this,
integrate the diffusion equation from $0$ to $L$, integrate
$u_{xx}$ by parts (or use Gauss' divergence theorem in 1D) to
arrive at the important property
$$
\frac{d}{dt}\int_{0}^L u(x,t)dx = 0,
$$
implying that $\int_0^Ludx$ must be constant in time, and therefore
$$
\int_{0}^L u(x,t)dx = \int_{0}^LI(x)dx\thinspace .
$$
The integral of the initial pulse is 1.
**c)**
Another idea for an artificial boundary condition at $x=L$
is to use a cooling law
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:Gaussian:xL:cooling"></div>
$$
\begin{equation}
-\dfc u_x = q(u - u_S),
\label{diffu:pde1:Gaussian:xL:cooling} \tag{27}
\end{equation}
$$
where $q$ is an unknown heat transfer coefficient and $u_S$ is
the surrounding temperature in the medium outside of $[0,L]$.
(Note that arguing that $u_S$ is approximately $u(L,t)$ gives
the $u_x=0$ condition from the previous subexercise that is
qualitatively wrong for large $t$.)
Develop a diffusion problem for the error in the solution using
([27](#diffu:pde1:Gaussian:xL:cooling)) as boundary condition.
Assume one can take $u_S=0$ "outside the domain" since
$\uex\rightarrow 0$ as $x\rightarrow\infty$.
Find a function $q=q(t)$ such that the exact solution
obeys the condition ([27](#diffu:pde1:Gaussian:xL:cooling)).
Test some constant values of $q$ and animate how the corresponding
error function behaves. Also compute $E(t)$ curves as defined above.
Filename: `diffu_open_BC`.
<!-- --- end exercise --- -->
<!-- --- begin exercise --- -->
## Exercise 4: Simulate a diffused Gaussian peak in 2D/3D
**a)**
Generalize ([7](#diffu:pde1:sol:Gaussian)) to multiple dimensions by
assuming that one-dimensional solutions can be multiplied to solve
$u_t = \dfc\nabla^2 u$. Set $c=0$ such that the peak of
the Gaussian is at the origin.
**b)**
One can from the exact solution show
that $u_x=0$ on $x=0$, $u_y=0$ on $y=0$, and $u_z=0$ on $z=0$.
The approximately correct condition $u=0$ can be set
on the remaining boundaries (say $x=L$, $y=L$, $z=L$), cf. [Exercise 3: Experiment with open boundary conditions in 1D](#diffu:exer:1D:openBC).
Simulate a 2D case and make an animation of the diffused Gaussian peak.
**c)**
The formulation in b) makes use of symmetry of the solution such that we
can solve the problem in the first quadrant (2D) or octant (3D) only.
To check that the symmetry assumption is correct, formulate the problem
without symmetry in a domain $[-L,L]\times [-L,L]$ in 2D. Use $u=0$ as
approximately correct boundary condition. Simulate the same case as
in b), but in a four times as large domain. Make an animation and compare
it with the one in b).
Filename: `diffu_symmetric_gaussian_2D`.
<!-- --- end exercise --- -->
<!-- --- begin exercise --- -->
## Exercise 5: Examine stability of a diffusion model with a source term
<div id="diffu:exer:uterm"></div>
Consider a diffusion equation with a linear $u$ term:
$$
u_t = \dfc u_{xx} + \beta u\thinspace .
$$
**a)**
Derive in detail the Forward Euler, Backward Euler,
and Crank-Nicolson schemes for this type of diffusion model.
Thereafter, formulate a $\theta$-rule to summarize the three schemes.
**b)**
Assume a solution like ([8](#diffu:pde1:sol1)) and find the relation
between $a$, $k$, $\dfc$, and $\beta$.
<!-- --- begin hint in exercise --- -->
**Hint.**
Insert ([8](#diffu:pde1:sol1)) in the PDE problem.
<!-- --- end hint in exercise --- -->
**c)**
Calculate the stability of the Forward Euler scheme. Design
numerical experiments to confirm the results.
<!-- --- begin hint in exercise --- -->
**Hint.**
Insert the discrete counterpart to ([8](#diffu:pde1:sol1)) in the
numerical scheme. Run experiments at the stability limit and slightly above.
<!-- --- end hint in exercise --- -->
**d)**
Repeat c) for the Backward Euler scheme.
**e)**
Repeat c) for the Crank-Nicolson scheme.
**f)**
How does the extra term $\beta u$ impact the accuracy of the three schemes?
<!-- --- begin hint in exercise --- -->
**Hint.**
For analysis of the accuracy,
compare the numerical and exact amplification factors, in
graphs and/or by Taylor series expansion.
<!-- --- end hint in exercise --- -->
Filename: `diffu_stability_uterm`.
<!-- --- end exercise --- -->
# License
```
# Copyright 2022 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# [Run in Colab](https://colab.research.google.com/github/google/profit-bidder/blob/main/solution_test/profit_bidder_quickstart.ipynb)
# Overview
This notebook is a quickstart guide that walks you through the different steps involved in the solution. Unlike the production pipeline that you can set up using the complete solution, the notebook runs through all the steps in one place using synthesized test data. Please note that you will **not be able to test the final step** because the data is synthesized.
## Scope of this notebook
### Dataset
We provide synthesized datasets in the Git repository that you will clone and use in this notebook. There are three CSV files:
* p_Campaign_43939335402485897.csv
* p_Conversion_43939335402485897.csv
* client_profit.csv
In addition, we provide the schemas for the above files in JSON format, which you will use in the notebook to create the tables in BigQuery.
### Objective
To help you become familiar with the following:
1. Setup your environment (install the libraries, initialize the variables, authenticate to Google Cloud, etc.)
1. Create a service account and two BigQuery datasets
1. Transform the data, create batches of the data, and push the data through a REST API call to CM360
### Costs
This tutorial uses billable components of Google Cloud:
* [BigQuery](https://cloud.google.com/bigquery)
Use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.
## Before you begin
For this reference guide, you need a [Google Cloud project](https://console.cloud.google.com/cloud-resource-manager).
You can create a new one, or select a project you already created.
The following steps are required, regardless where you are running your notebook (local or in Cloud AI Platform Notebook).
* [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
* [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
* (When using a non-Google Cloud local environment) Install the [Google Cloud SDK](https://cloud.google.com/sdk/)
### Mandatory variables
You must set the below variables:
* PB_GCP_PROJECT to [Your Google Cloud Project]
* PB_GCP_APPLICATION_CREDENTIALS to [Full path with the file name to the Service Account json file, if you chose to use Service Account to authenticate to Google Cloud]
# Setup environment
## *PIP install appropriate packages*
```
%pip install google-cloud-storage # for Storage Account
%pip install google-cloud # for cloud sdk
%pip install google-cloud-bigquery # for BigQuery
%pip install google-cloud-bigquery-storage # for BigQuery Storage client
%pip install google-api-python-client # for Key management
%pip install oauth2client # for Key management
```
## *Initialize all the variables*
### *Remove all notebook variables*
This comes in handy when troubleshooting.
```
# remove all local and global notebook variables
# ^^^^^^^^^^^^^^^^^^^^^
# beg utils
# ^^^^^^^^^^^^^^^^^^^^^
# local scope
myvar = [key for key in locals().keys() if not key.startswith('_')]
print (len(locals().keys()))
print (len(myvar))
# print (myvar)
for eachvar in myvar:
print (eachvar)
del locals()[eachvar]
print (len(locals().keys()))
# global scope
myvar = [key for key in globals().keys() if not key.startswith('_')]
print (len(globals().keys()))
print (len(myvar))
# print (myvar)
for eachvar in myvar:
print (eachvar)
del globals()[eachvar]
print (len(globals().keys()))
# ^^^^^^^^^^^^^^^^^^^^^
# end utils
# ^^^^^^^^^^^^^^^^^^^^^
```
### *Create Python and shell environment variables*
```
# GCP Project
PB_GCP_PROJECT = "my-project" #@param {type:"string"}
# Default values
PB_SOLUTION_PREFIX="pb_" #@param {type:"string"}
# service account
PB_SERVICE_ACCOUNT_NAME=PB_SOLUTION_PREFIX+"profit-bidder" #@param {type:"string"}
PB_SERVICE_ACCOUNT_NAME=PB_SERVICE_ACCOUNT_NAME.replace('_','-')
PB_SA_ROLES="roles/bigquery.dataViewer roles/pubsub.publisher roles/iam.serviceAccountTokenCreator"
PB_SA_EMAIL=PB_SERVICE_ACCOUNT_NAME + '@' + PB_GCP_PROJECT + '.iam.gserviceaccount.com'
# BQ DS for SA360/CM360
PB_DS_SA360=PB_SOLUTION_PREFIX + "sa360_data" #@param {type:"string"}
# BQ DS for Business data
PB_DS_BUSINESS_DATA=PB_SOLUTION_PREFIX + "business_data" #@param {type:"string"}
# Client margin table
PB_CLIENT_MARGIN_DATA_TABLE_NAME="client_margin_data_table" #@param {type:"string"}
# Tranformed data table
PB_CM360_TABLE="my_transformed_data" #@param {type:"string"}
PB_CM360_PROFILE_ID="my_cm_profileid" #@param {type:"string"}
PB_CM360_FL_ACTIVITY_ID="my_fl_activity_id" #@param {type:"string"}
PB_CM360_FL_CONFIG_ID="my_fl_config_id" #@param {type:"string"}
# DON'T CHANGE THE BELOW VARIABLES; they are hardcoded to match the test dataset
PB_SQL_TRANSFORM_ADVERTISER_ID="43939335402485897" # synthesized id used for testing
PB_CAMPAIGN_TABLE_NAME="p_Campaign_" + PB_SQL_TRANSFORM_ADVERTISER_ID
PB_CONVERSION_TABLE_NAME="p_Conversion_" + PB_SQL_TRANSFORM_ADVERTISER_ID
PB_TIMEZONE="America/New_York"
PB_REQUIRED_KEYS = [
'conversionId',
'conversionQuantity',
'conversionRevenue',
'conversionTimestamp',
'conversionVisitExternalClickId',
]
PB_API_SCOPES = ['https://www.googleapis.com/auth/dfareporting',
'https://www.googleapis.com/auth/dfatrafficking',
'https://www.googleapis.com/auth/ddmconversions',
'https://www.googleapis.com/auth/devstorage.read_write']
PB_CM360_API_NAME = 'dfareporting'
PB_CM360_API_VERSION = 'v3.5'
PB_BATCH_SIZE=100
# create a variable that you can pass to the bq Cell magic
# import the variables to the shell
import os
PB_all_args = [key for key in locals().keys() if not key.startswith('_')]
# print (PB_all_args)
PB_BQ_ARGS = {}
for PB_each_key in PB_all_args:
# print (f"{PB_each_key}:{locals()[PB_each_key]}")
if PB_each_key.upper().startswith(PB_SOLUTION_PREFIX.upper()):
PB_BQ_ARGS[PB_each_key] = locals()[PB_each_key]
os.environ[PB_each_key] = str(PB_BQ_ARGS[PB_each_key])
print (PB_BQ_ARGS)
```
## *Setup your Google Cloud project*
```
# set the desired Google Cloud project
!gcloud config set project $PB_GCP_PROJECT
import os
os.environ['GOOGLE_CLOUD_PROJECT'] = PB_GCP_PROJECT
# validate that the Google Cloud project has been set properly.
!echo 'gcloud will use the below project:'
!gcloud info --format='value(config.project)'
```
## *Authenticate with Google Cloud*
### Authenticate using ServiceAccount Key file
```
# download the ServiceAccount key and provide the path to the file below
# PB_GCP_APPLICATION_CREDENTIALS = "<Full path with the file name to the above downloaded json file>"
# PB_GCP_APPLICATION_CREDENTIALS = "/Users/dpani/Downloads/dpani-sandbox-2-3073195cd132.json"
# uncomment the below code in codelab environment
# authenticate using service account
# from google.colab import files
# # Upload service account key
# keyfile_upload = files.upload()
# PB_GCP_APPLICATION_CREDENTIALS = list(keyfile_upload.keys())[0]
# import os
# os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = PB_GCP_APPLICATION_CREDENTIALS
# # set the account
# !echo "Setting Service Account:" $PB_GCP_APPLICATION_CREDENTIALS
# !gcloud auth activate-service-account --key-file=$PB_GCP_APPLICATION_CREDENTIALS
```
### Authenticate using OAuth
```
# uncomment the below code in codelab environment
# authenticate using oauth
import sys
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
```
## *Enable the below Google Cloud Services for the solution*
```
# set the proper Permission for the required Google Cloud Services
!gcloud services enable \
bigquery.googleapis.com \
bigquerystorage.googleapis.com \
bigquerydatatransfer.googleapis.com \
doubleclickbidmanager.googleapis.com \
doubleclicksearch.googleapis.com \
storage-api.googleapis.com
```
# Utility functions
## *Delete a dataset in BigQuery (DDL)*
```
# delete the BigQuery dataset...!!! BE CAREFUL !!!
def delete_dataset(dataset_id):
"""Deletes a BigQuery dataset
This is not recommended for use in a production environment.
It comes in handy during the iterative development and testing phases of the SDLC.
!!! BE CAREFUL !!!!
Args:
dataset_id(:obj:`str`): The BigQuery dataset name that we want to delete
"""
# [START bigquery_delete_dataset]
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
# dataset_id = 'your-project.your_dataset'
# Use the delete_contents parameter to delete a dataset and its contents.
# Use the not_found_ok parameter to not receive an error if the
# dataset has already been deleted.
client.delete_dataset(
dataset_id, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(dataset_id))
```
## *Delete a table in BigQuery (DDL)*
```
# delete BigQuery table if not needed...!!! BE CAREFUL !!!
def delete_table(table_id):
"""Deletes a BigQuery table
This is not recommended for use in a production environment.
It comes in handy during the iterative development and testing phases of the SDLC.
!!! BE CAREFUL !!!!
Args:
table_id(:obj:`str`): The BigQuery table name that we want to delete
"""
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
# client.delete_table(table_id, not_found_ok=True) # Make an API request.
client.delete_table(table_id) # Make an API request.
print("Deleted table '{}'.".format(table_id))
```
## *Deletes a Service Account*
```
# delete a service account
def delete_service_account(PB_GCP_PROJECT: str,
PB_ACCOUNT_NAME: str
):
"""The function deletes a service account
This is not recommended for use in a production environment.
It comes in handy during the iterative development and testing phases of the SDLC.
!!! BE CAREFUL !!!!
Args:
PB_GCP_PROJECT:(:obj:`str`): Google Cloud project for deployment
PB_ACCOUNT_NAME:(:obj:`str`): Name of the service account.
"""
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('iam', 'v1', credentials=credentials)
# The resource name of the service account in the following format:
# `projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}`.
# Using `-` as a wildcard for the `PROJECT_ID` will infer the project from
# the account. The `ACCOUNT` value can be the `email` address or the
# `unique_id` of the service account.
name = f'projects/{PB_GCP_PROJECT}/serviceAccounts/{PB_ACCOUNT_NAME}@{PB_GCP_PROJECT}.iam.gserviceaccount.com'
print("Going to delete service account '{}'.".format(name))
request = service.projects().serviceAccounts().delete(name=name)
request.execute()
print("Account deleted")
```
# Profit bid solution
## *Create the Service Account and BigQuery datasets:*
* Service account (the same one used to push the conversion to the SA360/CM360)
* BQ DS for SA360/CM360
* BQ DS for Business data
```
%%bash
# create the service account
# and add necessary iam roles
function get_roles {
gcloud projects get-iam-policy ${PB_GCP_PROJECT} --flatten="bindings[].members" --format='table(bindings.role)' --filter="bindings.members:${PB_SA_EMAIL}"
}
function create_service_account {
echo "Creating service account $PB_SA_EMAIL"
gcloud iam service-accounts describe $PB_SA_EMAIL > /dev/null 2>&1
RETVAL=$?
if (( ${RETVAL} != "0" )); then
gcloud iam service-accounts create ${PB_SERVICE_ACCOUNT_NAME} --description 'Profit Bidder Service Account' --project ${PB_GCP_PROJECT}
fi
for role in ${PB_SA_ROLES}; do
echo -n "Adding ${PB_SERVICE_ACCOUNT_NAME} to ${role} "
if get_roles | grep $role &> /dev/null; then
echo "already added."
else
gcloud projects add-iam-policy-binding ${PB_GCP_PROJECT} --member="serviceAccount:${PB_SA_EMAIL}" --role="${role}"
echo "added."
fi
done
}
# Creates the service account and adds necessary permissions
create_service_account
function create_bq_ds {
dataset=$1
echo "Creating BQ dataset: '${dataset}'"
bq --project_id=${PB_GCP_PROJECT} show --dataset ${dataset} > /dev/null 2>&1
RETVAL=$?
if (( ${RETVAL} != "0" )); then
bq --project_id=${PB_GCP_PROJECT} mk --dataset ${dataset}
else
echo "Reusing ${dataset}."
fi
}
#create the BQ DSs
create_bq_ds $PB_DS_SA360
create_bq_ds $PB_DS_BUSINESS_DATA
```
## *Download the test data*
The test data is in the 'solution_test' folder.
```
%%bash
# Download the test data from gitrepo
DIR=$HOME/solutions/profit-bidder
if [ -d "$DIR" ]
then
echo $DIR already exists.
else
mkdir -p $HOME/solutions/profit-bidder
cd $HOME/solutions/profit-bidder
git clone https://github.com/google/profit-bidder.git .
fi
export PB_TEST_DATA_DIR=$DIR/solution_test
ls -ltrah $PB_TEST_DATA_DIR
echo $PB_TEST_DATA_DIR folder contains the test data.
```
## *Upload the test data to BigQuery*
```
%%bash
# uploads the test data into BigQuery
function create_bq_table {
dataset=$1
table_name=$2
schema_name=$3
sql_result=$(list_bq_table $1 $2)
echo "Creating BQ table: '${dataset}.${table_name}'"
if [[ "$sql_result" == *"1"* ]]; then
echo "Reusing ${dataset}.${table_name}."
else
bq --project_id=${PB_GCP_PROJECT} mk -t --schema ${schema_name} --time_partitioning_type DAY ${dataset}.${table_name}
fi
}
function delete_bq_table {
dataset=$1
table_name=$2
sql_result=$(list_bq_table $1 $2)
echo "Deleting BQ table: '${dataset}.${table_name}'"
if [[ "$sql_result" == *"1"* ]]; then
bq rm -f -t $PB_GCP_PROJECT:$dataset.$table_name
else
echo "${dataset}.${table_name} doesn't exists."
fi
}
function list_bq_table {
dataset=$1
table_name=$2
echo "Checking BQ table exist: '${dataset}.${table_name}'"
sql_query='SELECT
COUNT(1) AS cnt
FROM
`<myproject>`.<mydataset>.__TABLES_SUMMARY__
WHERE table_id = "<mytable_name>"'
sql_query="${sql_query/<myproject>/${PB_GCP_PROJECT}}"
sql_query="${sql_query/<mydataset>/${dataset}}"
sql_query="${sql_query/<mytable_name>/${table_name}}"
bq_qry_cmd="bq query --use_legacy_sql=false --format=csv '<mysql_qery>'"
bq_qry_cmd="${bq_qry_cmd/<mysql_qery>/${sql_query}}"
sql_result=$(eval $bq_qry_cmd)
if [[ "$sql_result" == *"1"* ]]; then
echo "${dataset}.${table_name} exist"
echo "1"
else
echo "${dataset}.${table_name} doesn't exist"
echo "0"
fi
}
function load_bq_table {
dataset=$1
table_name=$2
data_file=$3
schema_name=$4
sql_result=$(list_bq_table $1 $2)
echo "Loading data to BQ table: '${dataset}.${table_name}'"
if [[ "$sql_result" == *"1"* ]]; then
delete_bq_table $dataset $table_name
fi
if [[ "$schema_name" == *"autodetect"* ]]; then
bq --project_id=${PB_GCP_PROJECT} load \
--autodetect \
--source_format=CSV \
$dataset.$table_name \
$data_file
else
create_bq_table $dataset $table_name $schema_name
bq --project_id=${PB_GCP_PROJECT} load \
--source_format=CSV \
--time_partitioning_type=DAY \
--skip_leading_rows=1 \
${dataset}.${table_name} \
${data_file}
fi
}
# save the current working directory
current_working_dir=`pwd`
# change to the test data directory
DIR=$HOME/solutions/profit-bidder
export PB_TEST_DATA_DIR=$DIR/solution_test
ls -ltrah $PB_TEST_DATA_DIR
echo $PB_TEST_DATA_DIR folder contains the test data.
cd $PB_TEST_DATA_DIR
pwd
# create campaign table
# load test data to campaign table
load_bq_table $PB_DS_SA360 $PB_CAMPAIGN_TABLE_NAME "p_Campaign_${PB_SQL_TRANSFORM_ADVERTISER_ID}.csv" "p_Campaign_schema.json"
# create conversion table
# load test data to conversion
load_bq_table $PB_DS_SA360 $PB_CONVERSION_TABLE_NAME "p_Conversion_${PB_SQL_TRANSFORM_ADVERTISER_ID}.csv" "${PB_TEST_DATA_DIR}/p_Conversion_schema.json"
# load test profit data
load_bq_table $PB_DS_BUSINESS_DATA $PB_CLIENT_MARGIN_DATA_TABLE_NAME "client_profit.csv" "autodetect"
# change to original working directory
cd $current_working_dir
pwd
```
## *Create a BigQuery client, import the libraries, load the bigquery Cell magic*
```
# create a BigQuery client
from google.cloud import bigquery
bq_client = bigquery.Client(project=PB_GCP_PROJECT)
# load the bigquery Cell magic
# %load_ext google.cloud.bigquery
%reload_ext google.cloud.bigquery
# test that BigQuery client works
sql = """
SELECT name
FROM `bigquery-public-data.usa_names.usa_1910_current`
WHERE state = 'TX'
LIMIT 100
"""
# Run a Standard SQL query using the environment's default project
df = bq_client.query(sql).to_dataframe()
df
```
## *Transform and aggregate*
```
# The below query transforms the data from Campaign, Conversion,
# and profit tables.
aggregate_sql = f"""
-- Copyright 2021 Google LLC
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
-- ****** TEMPLATE CODE ******
-- NOTE: Please thoroughly review and test your version of this query before launching your pipeline
-- The resulting data from this script should provide all the necessary columns for upload via
-- the CM360 API and the SA360 API
--
-- the below placeholders must be replaced with appropriate values.
-- install.sh does so
-- project_id as: {PB_GCP_PROJECT}
-- sa360_dataset_name as: {PB_DS_SA360}
-- advertiser_id as: {PB_SQL_TRANSFORM_ADVERTISER_ID}
-- timezone as: America/New_York e.g. America/New_York
-- floodlight_name as: My Sample Floodlight Activity
-- account_type as: Other engines
-- gmc_dataset_name as: pb_gmc_data
-- gmc_account_id as: mygmc_account_id
-- business_dataset_name as: {PB_DS_BUSINESS_DATA}
-- client_margin_data_table as: {PB_CLIENT_MARGIN_DATA_TABLE_NAME}
-- client_profit_data_sku_col as: sku
-- client_profit_data_profit_col as: profit
-- target_floodlight_name as: My Sample Floodlight Activity
-- product_sku_var as: u9
-- product_quantity_var as: u10
-- product_unit_price_var as: u11
-- product_sku_regex as: (.*?);
-- product_quantity_regex as: (.*?);
-- product_unit_price_regex as: (.*?);
-- product_sku_delim as: |
-- product_quantity_delim as: |
-- product_unit_price_delim as: |
--
WITH
campaigns AS (
-- Example: Extracting all campaign names and IDs if needed for filtering for
-- conversions for a subset of campaigns
SELECT
campaign,
campaignId,
row_number() OVER (partition BY campaignId ORDER BY lastModifiedTimestamp DESC) as row_num -- for de-duping
FROM `{PB_GCP_PROJECT}.{PB_DS_SA360}.p_Campaign_{PB_SQL_TRANSFORM_ADVERTISER_ID}`
-- Be sure to replace the Timezone with what is appropriate for your use case
WHERE EXTRACT(DATE FROM _PARTITIONTIME) >= DATE_SUB(CURRENT_DATE('America/New_York'), INTERVAL 7 DAY)
)
,expanded_conversions AS (
-- Parses out all relevant product data from a conversion request string
SELECT
conv.*,
campaign,
-- example of U-Variables that are parsed to extract product purchase data
SPLIT(REGEXP_EXTRACT(floodlightEventRequestString, "u9=(.*?);"),"|") AS u9,
SPLIT(REGEXP_EXTRACT(floodlightEventRequestString, "u10=(.*?);"),"|") AS u10,
SPLIT(REGEXP_EXTRACT(floodlightEventRequestString, "u11=(.*?);"),"|") AS u11,
FROM `{PB_GCP_PROJECT}.{PB_DS_SA360}.p_Conversion_{PB_SQL_TRANSFORM_ADVERTISER_ID}` AS conv
LEFT JOIN (
SELECT campaign, campaignId
FROM campaigns
WHERE row_num = 1
GROUP BY 1,2
) AS camp
USING (campaignId)
WHERE
-- Filter for conversions that occurred in the previous day
-- Be sure to replace the Timezone with what is appropriate for your use case
floodlightActivity IN ('My Sample Floodlight Activity')
AND accountType = 'Other engines' -- filter by Account Type as needed
)
,flattened_conversions AS (
-- Flattens the extracted product data for each conversion which leaves us with a row
-- of data for each product purchased as part of a given conversion
SELECT
advertiserId,
campaignId,
conversionId,
skuId,
pos1,
quantity,
pos2,
cost,
pos3
FROM expanded_conversions,
UNNEST(expanded_conversions.u9) AS skuId WITH OFFSET pos1,
UNNEST(expanded_conversions.u10) AS quantity WITH OFFSET pos2,
UNNEST(expanded_conversions.u11) AS cost WITH OFFSET pos3
WHERE pos1 = pos2 AND pos1 = pos3 AND skuId != ''
GROUP BY 1,2,3,4,5,6,7,8,9
ORDER BY conversionId
)
,inject_gmc_margin AS (
-- Merges Margin data with the products found in the conversion data
SELECT
advertiserId,
campaignId,
conversionId,
skuId,
quantity,
IF(cost = '', '0', cost) as cost,
pos1,
pos2,
pos3,
-- PLACEHOLDER MARGIN, X% for unclassified items
CASE
WHEN profit IS NULL THEN 0.0
ELSE profit
END AS margin,
sku,
FROM flattened_conversions
LEFT JOIN `{PB_GCP_PROJECT}.{PB_DS_BUSINESS_DATA}.{PB_CLIENT_MARGIN_DATA_TABLE_NAME}`
ON flattened_conversions.skuId = sku
group by 1,2,3,4,5,6,7,8,9,10,11
)
,all_conversions as (
-- Rolls up all previously expanded conversion data while calculating profit based on the matched
-- margin value. Also assigns timestamp in millis and micros
SELECT
e.account,
e.accountId,
e.accountType,
e.advertiser,
igm.advertiserId,
e.agency,
e.agencyId,
igm.campaignId,
e.campaign,
e.conversionAttributionType,
e.conversionDate,
-- '00' may be changed to any string value that will help you identify these
-- new conversions in reporting
CONCAT(igm.conversionId, '00') as conversionId,
e.conversionLastModifiedTimestamp,
-- Note:Rounds float quantity and casts to INT, change based on use case
-- This is done to support CM360 API
CAST(ROUND(e.conversionQuantity) AS INT64) AS conversionQuantity,
e.conversionRevenue,
SUM(
FLOOR(CAST(igm.cost AS FLOAT64))
) AS CALCULATED_REVENUE,
-- PROFIT CALCULATED HERE, ADJUST LOGIC AS NEEDED FOR YOUR USE CASE
ROUND(
SUM(
-- multiply item cost by class margin
SAFE_MULTIPLY(
CAST(igm.cost AS FLOAT64),
igm.margin)
),2
) AS CALCULATED_PROFIT,
e.conversionSearchTerm,
e.conversionTimestamp,
-- SA360 timestamp should be in millis
UNIX_MILLIS(e.conversionTimestamp) as conversionTimestampMillis,
-- CM360 Timestamp should be in micros
UNIX_MICROS(e.conversionTimestamp) as conversionTimestampMicros,
e.conversionType,
e.conversionVisitExternalClickId,
e.conversionVisitId,
e.conversionVisitTimestamp,
e.deviceSegment,
e.floodlightActivity,
e.floodlightActivityId,
e.floodlightActivityTag,
e.floodlightEventRequestString,
e.floodlightOrderId,
e.floodlightOriginalRevenue,
status
FROM inject_gmc_margin AS igm
LEFT JOIN expanded_conversions AS e
ON igm.advertiserID = e.advertiserId AND igm.campaignId = e.campaignID AND igm.conversionId = e.conversionId
GROUP BY 1,2,3,4,5,6,8,7,9,10,11,12,13,14,15,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33
)
-- The columns below represent the original conversion data with their new profit
-- values calculated (assigned to conversionRevenue column) along with any original
-- floodlight data that the client wishes to keep for troubleshooting.
SELECT
account,
accountId,
accountType,
advertiser,
advertiserId,
agency,
agencyId,
campaignId,
campaign,
conversionId,
conversionAttributionType,
conversionDate,
conversionTimestamp,
conversionTimestampMillis,
conversionTimestampMicros,
CALCULATED_PROFIT AS conversionRevenue,
conversionQuantity,
-- The below is used only for troubleshooting purposes.
"My Sample Floodlight Activity" AS floodlightActivity,
conversionSearchTerm,
conversionType,
conversionVisitExternalClickId,
conversionVisitId,
conversionVisitTimestamp,
deviceSegment,
CALCULATED_PROFIT,
CALCULATED_REVENUE,
-- Please prefix any original conversion values you wish to keep with "original".
-- These values may help with troubleshooting
conversionRevenue AS originalConversionRevenue,
floodlightActivity AS originalFloodlightActivity,
floodlightActivityId AS originalFloodlightActivityId,
floodlightActivityTag AS originalFloodlightActivityTag,
floodlightOriginalRevenue AS originalFloodlightRevenue,
floodlightEventRequestString,
floodlightOrderId
FROM all_conversions
WHERE CALCULATED_PROFIT > 0.0
ORDER BY account ASC
"""
# execute the transform query
df = bq_client.query(aggregate_sql).to_dataframe()
# print a couple of records of the transformed query
df.head()
# write the data to a table
df.to_gbq(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}',
project_id=PB_GCP_PROJECT,
if_exists='replace',
progress_bar=True,)
```
## *Formulate the payload and push to CM360*
```
# Reads the from transformed table, chunks the data,
# and uploads the data to CM360
# We need to chunk the data so as to adhere
# to the payload limit of the CM360 REST API.
import pytz
import datetime
import decimal
import logging
import json
import google.auth
import google.auth.impersonated_credentials
import google_auth_httplib2
from googleapiclient import discovery
def today_date(timezone):
"""Returns today's date using the timezone
Args:
timezone(:obj:`str`): The timezone with default to America/New_York
Returns:
Date: today's date
"""
tz = pytz.timezone(timezone)
return datetime.datetime.now(tz).date()
def time_now_str(timezone):
"""Returns today's date using the timezone
Args:
timezone(:obj:`str`): The timezone with default to America/New_York
Returns:
Timezone: current timezone
"""
# set correct timezone for datetime check
tz = pytz.timezone(timezone)
return datetime.datetime.now(tz).strftime("%m-%d-%Y, %H:%M:%S")
def pluralize(count):
"""An utility function
Args:
count(:obj:`int`): A number
Returns:
str: 's' or empty
"""
if count > 1:
return 's'
return ''
def get_data(table_ref_name, cloud_client, batch_size):
"""Returns the data from the transformed table.
Args:
table_ref_name(:obj:`google.cloud.bigquery.table.Table`): Reference to the table
cloud_client(:obj:`google.cloud.bigquery.client.Client`): BigQuery client
batch_size(:obj:`int`): Batch size
Returns:
Array[]: list/rows of data
"""
current_batch = []
table = cloud_client.get_table(table_ref_name)
print(f'Downloading {table.num_rows} rows from table {table_ref_name}')
skip_stats = {}
for row in cloud_client.list_rows(table_ref_name):
missing_keys = []
for key in PB_REQUIRED_KEYS:
val = row.get(key)
if val is None:
missing_keys.append(key)
count = skip_stats.get(key, 0)
count += 1
skip_stats[key] = count
if len(missing_keys) > 0:
row_as_dict = dict(row.items())
logging.debug(f'Skipped row: missing values for keys {missing_keys} in row {row_as_dict}')
continue
result = {}
conversionTimestamp = row.get('conversionTimestamp')
# convert floating point seconds to microseconds since the epoch
result['conversionTimestampMicros'] = int(conversionTimestamp.timestamp() * 1_000_000)
for key in row.keys():
value = row.get(key)
if type(value) == datetime.datetime or type(value) == datetime.date:
result[key] = value.strftime("%y-%m-%d ")
elif type(value) == decimal.Decimal:
result[key] = float(value)
else:
result[key] = value
current_batch.append(result)
if len(current_batch) >= batch_size:
yield current_batch
current_batch = []
if len(current_batch) > 0:
yield current_batch
pretty_skip_stats = ', '.join([f'{val} row{pluralize(val)} missing key "{key}"' for key, val in skip_stats.items()])
logging.info(f'Processed {table.num_rows} from table {table_ref_name} skipped {pretty_skip_stats}')
def setup(sa_email, api_scopes, api_name, api_version):
"""Impersonates a service account, authenticate with Google Service,
and returns a discovery api for further communication with Google Services.
Args:
sa_email(:obj:`str`): Service Account to impersonate
api_scopes(:obj:`Any`): An array of scope that the service account
expects to have permission in CM360
api_name(:obj:`str`): CM360 API Name
api_version(:obj:`str`): CM360 API version
Returns:
module:discovery: to interact with Google Services.
"""
source_credentials, project_id = google.auth.default()
target_credentials = google.auth.impersonated_credentials.Credentials(
source_credentials=source_credentials,
target_principal=sa_email,
target_scopes=api_scopes,
delegates=[],
lifetime=500)
http = google_auth_httplib2.AuthorizedHttp(target_credentials)
# setup API service here
try:
return discovery.build(
api_name,
api_version,
cache_discovery=False,
http=http)
except:
print('Could not authenticate')
def upload_data(timezone, rows, profile_id, fl_configuration_id, fl_activity_id):
"""POSTs the conversion data using CM360 API
Args:
timezone(:obj:`Timezone`): Current timezone or defaulted to America/New_York
rows(:obj:`Any`): An array of conversion data
profile_id(:obj:`str`): Profile id - should be gathered from the CM360
fl_configuration_id(:obj:`str`): Floodlight config id - should be gathered from the CM360
fl_activity_id(:obj:`str`): Floodlight activity id - should be gathered from the CM360
"""
print('Starting conversions for ' + time_now_str(timezone))
if not fl_activity_id or not fl_configuration_id:
print('Please make sure to provide a value for both floodlightActivityId and floodlightConfigurationId!!')
return
# Build the API connection
try:
service = setup(PB_SA_EMAIL, PB_API_SCOPES,
PB_CM360_API_NAME, PB_CM360_API_VERSION)
# upload_log = ''
print('Authorization successful')
currentrow = 0
all_conversions = """{"kind": "dfareporting#conversionsBatchInsertRequest", "conversions": ["""
while currentrow < len(rows):
for row in rows[currentrow:min(currentrow+100, len(rows))]:
conversion = json.dumps({
'kind': 'dfareporting#conversion',
'gclid': row['conversionVisitExternalClickId'],
'floodlightActivityId': fl_activity_id, # (Use short form CM Floodlight Activity Id )
'floodlightConfigurationId': fl_configuration_id, # (Can be found in CM UI)
'ordinal': row['conversionId'],
'timestampMicros': row['conversionTimestampMicros'],
'value': row['conversionRevenue'],
'quantity': row['conversionQuantity'] #(Alternatively, this can be hardcoded to 1)
})
# print('Conversion: ', conversion) # uncomment if you want to output each conversion
all_conversions = all_conversions + conversion + ','
all_conversions = all_conversions[:-1] + ']}'
payload = json.loads(all_conversions)
print(f'CM360 request payload: {payload}')
request = service.conversions().batchinsert(profileId=profile_id, body=payload)
print('[{}] - CM360 API Request: '.format(time_now_str(timezone)), request)
response = request.execute()
print('[{}] - CM360 API Response: '.format(time_now_str(timezone)), response)
if not response['hasFailures']:
print('Successfully inserted batch of 100.')
else:
status = response['status']
for line in status:
try:
if line['errors']:
for error in line['errors']:
print('Error in line ' + json.dumps(line['conversion']))
print('\t[%s]: %s' % (error['code'], error['message']))
except:
print('Conversion with gclid ' + line['gclid'] + ' inserted.')
print('Either finished or found errors.')
currentrow += 100
all_conversions = """{"kind": "dfareporting#conversionsBatchInsertRequest", "conversions": ["""
except:
print('Could not authenticate')
def partition_and_distribute(cloud_client, table_ref_name, batch_size, timezone,
profile_id, fl_configuration_id, fl_activity_id):
"""Partitions the data to chunks of batch size and
uploads to the CM360
Args:
table_ref_name(:obj:`google.cloud.bigquery.table.Table`): Reference to the table
cloud_client(:obj:`google.cloud.bigquery.client.Client`): BigQuery client
batch_size(:obj:`int`): Batch size
timezone(:obj:`Timezone`): Current timezone or defaulted to America/New_York
profile_id(:obj:`str`): Profile id - should be gathered from the CM360
fl_configuration_id(:obj:`str`): Floodlight config id - should be gathered from the CM360
fl_activity_id(:obj:`str`): Floodlight activity id - should be gathered from the CM360
"""
for batch in get_data(table_ref_name, cloud_client, batch_size):
# print(f'Batch size: {len(batch)} batch: {batch}')
upload_data(timezone, batch, profile_id, fl_configuration_id,
fl_activity_id)
# DEBUG BREAK!
if batch_size == 1:
break
try:
table = bq_client.get_table(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}')
except:
print ('Could not find table with the provided table name: {}.'.format(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}'))
table = None
todays_date = today_date(PB_TIMEZONE)
if table is not None:
table_ref_name = table.full_table_id.replace(':', '.')
if table.modified.date() == todays_date or table.created.date() == todays_date:
print('[{}] is up-to-date. Continuing with upload...'.format(table_ref_name))
partition_and_distribute(bq_client, table_ref_name, PB_BATCH_SIZE,
PB_TIMEZONE, PB_CM360_PROFILE_ID,
PB_CM360_FL_CONFIG_ID, PB_CM360_FL_ACTIVITY_ID)
else:
print('[{}] data may be stale. Please check workflow to verify that it has run correctly. Upload is aborted!'.format(table_ref_name))
else:
print('Table not found! Please double check your workflow for any errors.')
```
# Clean up - !!! BE CAREFUL!!!
## Delete the transformed table
```
# deletes the transformed table
delete_table(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}')
```
## Delete the SA and BQ DSs:
* Service account (the same one used to push the conversion to the SA360/CM360)
* BQ DS for SA360/CM360
* BQ DS for Business data
```
# deletes the service account
delete_service_account(PB_GCP_PROJECT, PB_SERVICE_ACCOUNT_NAME)
# deletes the dataset
delete_dataset(PB_DS_SA360)
delete_dataset(PB_DS_BUSINESS_DATA)
```
## Delete the Google Cloud Project
To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial, **delete the project**.
The easiest way to eliminate billing is to delete the project you created for the tutorial.
**Caution**: Deleting a project has the following effects:
* *Everything in the project is deleted.* If you used an existing project for this tutorial, when you delete it, you also delete any other work you've done in the project.
* <b>Custom project IDs are lost.</b> When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an <b>appspot.com</b> URL, delete selected resources inside the project instead of deleting the whole project.
If you plan to explore multiple tutorials and quickstarts, reusing projects can help you avoid exceeding project quota limits.
<br>
<ol type="1">
<li>In the Cloud Console, go to the <b>Manage resources</b> page.</li>
Go to the <a href="https://console.cloud.google.com/iam-admin/projects">Manage resources page</a>
<li>In the project list, select the project that you want to delete and then click <b>Delete</b> Trash icon.</li>
<li>In the dialog, type the project ID and then click <b>Shut down</b> to delete the project. </li>
</ol>
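Alternatively, if you prefer the command line, here is a minimal sketch that deletes the project with `gcloud` (it assumes the `PB_GCP_PROJECT` variable exported earlier in this notebook):
```
%%bash
# !!! BE CAREFUL !!! This schedules the entire project for deletion.
gcloud projects delete ${PB_GCP_PROJECT} --quiet
```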
# Road Following - Live demo
In this notebook, we will use the model we trained to move the JetBot smoothly along the track.
### Load Trained Model
We will assume that you have already downloaded ``best_steering_model_xy.pth`` to your workstation as instructed in the "train_model.ipynb" notebook. Now, you should upload the model file to the JetBot into this notebook's directory. Once that's finished, there should be a file named ``best_steering_model_xy.pth`` in this notebook's directory.
> Please make sure the file has uploaded fully before calling the next cell
Execute the code below to initialize the PyTorch model. This should look very familiar from the training notebook.
```
import torchvision
import torch
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
```
Next, load the trained weights from the ``best_steering_model_xy.pth`` file that you uploaded.
```
model.load_state_dict(torch.load('best_steering_model_xy.pth'))
```
Currently, the model weights are located in CPU memory. Execute the code below to transfer them to the GPU device.
```
device = torch.device('cuda')
model = model.to(device)
model = model.eval().half()
```
### Creating the Pre-Processing Function
We have now loaded our model, but there's a slight issue: the format that we trained our model on doesn't exactly match the format of the camera. To fix that, we need to do some preprocessing. This involves the following steps:
1. Convert from HWC layout to CHW layout
2. Normalize using the same parameters as we did during training (our camera provides values in the [0, 255] range and training loaded images in the [0, 1] range, so we need to scale by 255.0)
3. Transfer the data from CPU memory to GPU memory
4. Add a batch dimension
```
import torchvision.transforms as transforms
import torch.nn.functional as F
import cv2
import PIL.Image
import numpy as np
mean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()
std = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()
def preprocess(image):
image = PIL.Image.fromarray(image)
image = transforms.functional.to_tensor(image).to(device).half()
image.sub_(mean[:, None, None]).div_(std[:, None, None])
return image[None, ...]
```
Awesome! We've now defined our pre-processing function which can convert images from the camera format to the neural network input format.
Now, let's start and display our camera. You should be pretty familiar with this by now.
```
from IPython.display import display
import ipywidgets
import traitlets
from jetbot import Camera, bgr8_to_jpeg
camera = Camera()
image_widget = ipywidgets.Image()
traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
display(image_widget)
```
We'll also create our robot instance which we'll need to drive the motors.
```
from jetbot import Robot
robot = Robot()
```
Now, we will define sliders to control JetBot
> Note: We have initialized the slider values to the best known configurations; however, these might not work for your dataset, so please increase or decrease the sliders according to your setup and environment
1. Speed Control (speed_gain_slider): To start your JetBot increase ``speed_gain_slider``
2. Steering Gain Control (steering_gain_slider): If you see the JetBot wobbling, you need to reduce ``steering_gain_slider`` till it is smooth
3. Steering Bias control (steering_bias_slider): If you see the JetBot is biased towards the extreme right or extreme left side of the track, you should adjust this slider till the JetBot starts following the line or track in the center. This accounts for motor biases as well as camera offsets
> Note: You should play around with the above mentioned sliders at lower speeds to get smooth JetBot road following behavior.
```
speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, description='speed gain')
steering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.2, description='steering gain')
steering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.0, description='steering kd')
steering_bias_slider = ipywidgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0, description='steering bias')
display(speed_gain_slider, steering_gain_slider, steering_dgain_slider, steering_bias_slider)
```
Next, let's display some sliders that will let us see what JetBot is thinking. The x and y sliders will display the predicted x, y values.
The steering slider will display our estimated steering value. Please remember, this value isn't the actual angle of the target, but simply a value that is
nearly proportional. When the actual angle is ``0``, this will be zero, and it will increase / decrease with the actual angle.
```
x_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='x')
y_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='y')
steering_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='steering')
speed_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='speed')
display(ipywidgets.HBox([y_slider, speed_slider]))
display(x_slider, steering_slider)
```
Next, we'll create a function that will get called whenever the camera's value changes. This function will do the following steps
1. Pre-process the camera image
2. Execute the neural network
3. Compute the approximate steering value
4. Control the motors using proportional / derivative control (PD)
```
angle = 0.0
angle_last = 0.0
def execute(change):
global angle, angle_last
image = change['new']
xy = model(preprocess(image)).detach().float().cpu().numpy().flatten()
x = xy[0]
y = (0.5 - xy[1]) / 2.0
x_slider.value = x
y_slider.value = y
speed_slider.value = speed_gain_slider.value
angle = np.arctan2(x, y)
pid = angle * steering_gain_slider.value + (angle - angle_last) * steering_dgain_slider.value
angle_last = angle
steering_slider.value = pid + steering_bias_slider.value
robot.left_motor.value = max(min(speed_slider.value + steering_slider.value, 1.0), 0.0)
robot.right_motor.value = max(min(speed_slider.value - steering_slider.value, 1.0), 0.0)
execute({'new': camera.value})
```
Cool! We've created our neural network execution function, but now we need to attach it to the camera for processing.
We accomplish that with the observe function.
>WARNING: This code will move the robot!! Please make sure your robot has clearance and that it is on the Lego or track you have collected data on. The road follower should work, but the neural network is only as good as the data it's trained on!
```
camera.observe(execute, names='value')
```
Awesome! If your robot is plugged in, it should now be generating new commands with each new camera frame.
You can now place the JetBot on the Lego or track you have collected data on and see whether it can follow the track.
If you want to stop this behavior, you can unattach this callback by executing the code below.
```
import time
camera.unobserve(execute, names='value')
time.sleep(0.1) # add a small sleep to make sure frames have finished processing
robot.stop()
```
### Conclusion
That's it for this live demo! Hopefully you had some fun seeing your JetBot moving smoothly on the track, following the road!!!
If your JetBot wasn't following the road very well, try to spot where it fails. The beauty is that we can collect more data for these failure scenarios and the JetBot should get even better :)
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/GetStarted/08_masking.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/GetStarted/08_masking.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=GetStarted/08_masking.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/GetStarted/08_masking.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# This function gets NDVI from Landsat 5 imagery.
def getNDVI(image):
return image.normalizedDifference(['B4', 'B3'])
# Load two Landsat 5 images, 20 years apart.
image1 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_19900604')
image2 = ee.Image('LANDSAT/LT05/C01/T1_TOA/LT05_044034_20100611')
# Compute NDVI from the scenes.
ndvi1 = getNDVI(image1)
ndvi2 = getNDVI(image2)
# Compute the difference in NDVI.
ndviDifference = ndvi2.subtract(ndvi1)
# Load the land mask from the SRTM DEM.
landMask = ee.Image('CGIAR/SRTM90_V4').mask()
# Update the NDVI difference mask with the land mask.
maskedDifference = ndviDifference.updateMask(landMask)
# Display the masked result.
vizParams = {'min': -0.5, 'max': 0.5,
'palette': ['FF0000', 'FFFFFF', '0000FF']}
Map.setCenter(-122.2531, 37.6295, 9)
Map.addLayer(maskedDifference, vizParams, 'NDVI difference')
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
# Data Exploration
Learning objectives:
1. Learn useful patterns for exploring data before modeling
2. Gain an understanding of the dataset and identify any data issues.
The goal of this notebook is to explore our base tables before we begin feature engineering and modeling. We will explore the price history of stocks in the S&P 500.
* Price history : Price history of stocks
* S&P 500 : A list of all companies and symbols for companies in the S&P 500
For our analysis, let's limit the price history to the period since 2000. In general, the further back historical data goes, the lower its predictive power can be.
```
import os
PROJECT = 'your-gcp-project' # Change to your project.
BUCKET = PROJECT
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from google.cloud import bigquery
from IPython.core.magic import register_cell_magic
from IPython import get_ipython
bq = bigquery.Client(project=PROJECT)
# Allow you to easily have Python variables in SQL query.
@register_cell_magic('with_globals')
def with_globals(line, cell):
contents = cell.format(**globals())
if 'print' in line:
print(contents)
get_ipython().run_cell(contents)
```
## Preparing the dataset
Let's create the dataset in our project's BigQuery and import the stock data by running the following cells:
```
!bq mk stock_src
%%bash
TABLE=price_history
SCHEMA=symbol:STRING,Date:DATE,Open:FLOAT,Close:FLOAT
test -f $TABLE.csv || unzip ../stock_src/$TABLE.csv.zip
gsutil -m cp $TABLE.csv gs://$BUCKET/stock_src/$TABLE.csv
bq load --source_format=CSV --skip_leading_rows=1 \
stock_src.$TABLE gs://$BUCKET/stock_src/$TABLE.csv $SCHEMA
%%bash
TABLE=eps
SCHEMA=date:DATE,company:STRING,symbol:STRING,surprise:STRING,reported_EPS:FLOAT,consensus_EPS:FLOAT
test -f $TABLE.csv || unzip ../stock_src/$TABLE.csv.zip
gsutil -m cp $TABLE.csv gs://$BUCKET/stock_src/$TABLE.csv
bq load --source_format=CSV --skip_leading_rows=1 \
stock_src.$TABLE gs://$BUCKET/stock_src/$TABLE.csv $SCHEMA
%%bash
TABLE=snp500
SCHEMA=company:STRING,symbol:STRING,industry:STRING
test -f $TABLE.csv || unzip ../stock_src/$TABLE.csv.zip
gsutil -m cp $TABLE.csv gs://$BUCKET/stock_src/$TABLE.csv
bq load --source_format=CSV --skip_leading_rows=1 \
stock_src.$TABLE gs://$BUCKET/stock_src/$TABLE.csv $SCHEMA
```
Let's look at the tables and columns we have for analysis.
**Learning objective 1.**
```
%%with_globals
%%bigquery --project {PROJECT}
SELECT table_name, column_name, data_type
FROM `stock_src.INFORMATION_SCHEMA.COLUMNS`
ORDER BY table_name, ordinal_position
```
## Price History
Retrieve Google's stock price history.
```
def query_stock(symbol):
return bq.query('''
SELECT *
FROM `stock_src.price_history`
WHERE symbol="{0}"
ORDER BY Date
'''.format(symbol)).to_dataframe()
df_stock = query_stock('GOOG')
df_stock.Date = pd.to_datetime(df_stock.Date)
ax = df_stock.plot(x='Date', y='Close', title='Google stock')
# Add smoothed plot.
df_stock['Close_smoothed'] = df_stock.Close.rolling(100, center=True).mean()
df_stock.plot(x='Date', y='Close_smoothed', ax=ax);
```
Compare Google to the S&P 500 index
```
df_sp = query_stock('gspc')
def plot_with_sp(symbol):
df_stock = query_stock(symbol)
df_stock.Date = pd.to_datetime(df_stock.Date)
df_stock.Date = pd.to_datetime(df_stock.Date)
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax = df_sp.plot(x='Date', y='Close', label='S&P', color='green', ax=ax1,
alpha=0.7)
ax = df_stock.plot(x='Date', y='Close', label=symbol,
title=symbol + ' and S&P index', ax=ax2, alpha=0.7)
ax1.legend(loc=3)
ax2.legend(loc=4)
ax1.set_ylabel('S&P price')
ax2.set_ylabel(symbol + ' price')
ax.set_xlim(pd.to_datetime('2004-08-05'), pd.to_datetime('2013-08-05'))
plot_with_sp('GOOG')
```
**Learning objective 2**
```
plot_with_sp('IBM')
```
Let's see how the prices of stocks change over time on a yearly basis. Using the `LAG` function, we can compute the change in stock price year-over-year.
Let's compute the average close difference for each year. This could, of course, be done in Pandas. Oftentimes it's useful to use some combination of BigQuery and Pandas for exploratory analysis. In general, it's most effective to let BigQuery do the heavy-duty processing and then use Pandas for smaller data and visualization.
**Learning objective 1, 2**
```
%%with_globals
%%bigquery df --project {PROJECT}
WITH
with_year AS
(
SELECT symbol,
EXTRACT(YEAR FROM date) AS year,
close
FROM `stock_src.price_history`
WHERE symbol in (SELECT symbol FROM `stock_src.snp500`)
),
year_aggregated AS
(
SELECT year, symbol, AVG(close) as avg_close
FROM with_year
WHERE year >= 2000
GROUP BY year, symbol
)
SELECT year, symbol, avg_close as close,
(LAG(avg_close, 1) OVER (PARTITION BY symbol order by year DESC))
AS next_yr_close
FROM year_aggregated
ORDER BY symbol, year
```
Compute the year-over-year percentage increase.
```
df.dropna(inplace=True)
df['percent_increase'] = (df.next_yr_close - df.close) / df.close
```
Let's visualize the yearly percentage increase for a few randomly sampled stocks.
```
def get_random_stocks(n=5):
random_stocks = df.symbol.sample(n=n, random_state=3)
rand = df.merge(random_stocks)
return rand[['year', 'symbol', 'percent_increase']]
rand = get_random_stocks()
for symbol, _df in rand.groupby('symbol'):
plt.figure()
sns.barplot(x='year', y="percent_increase", data=_df)
plt.title(symbol)
```
There have been some major fluctuations in individual stocks. For example, there were major drops during the early 2000s for tech companies.
```
df.sort_values('percent_increase').head()
stock_symbol = 'YHOO'
%%with_globals
%%bigquery df --project {PROJECT}
SELECT date, close
FROM `stock_src.price_history`
WHERE symbol='{stock_symbol}'
ORDER BY date
ax = df.plot(x='date', y='close')
```
**Stock splits** can also impact our data - causing a stock price to rapidly drop. In practice, we would need to clean all of our stock data to account for this. This would be a major effort! Fortunately, in the case of [IBM](https://www.fool.com/investing/2017/01/06/ibm-stock-split-will-2017-finally-be-the-year-shar.aspx), for example, all stock splits occurred before the year 2000.
**Learning objective 2**
```
stock_symbol = 'IBM'
%%with_globals
%%bigquery df --project {PROJECT}
SELECT date, close
FROM `stock_src.price_history`
WHERE symbol='{stock_symbol}'
ORDER BY date
IBM_STOCK_SPLIT_DATE = '1979-05-10'
ax = df.plot(x='date', y='close')
ax.vlines(pd.to_datetime(IBM_STOCK_SPLIT_DATE),
0, 500, linestyle='dashed', color='grey', alpha=0.7);
```
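In practice, a split can be corrected by dividing pre-split prices by the split ratio. The sketch below is illustrative only: the split ratio is a made-up value and is not part of the data loaded in this notebook.
```
# Illustrative sketch of split adjustment (hypothetical ratio, not from our data).
split_date = pd.to_datetime(IBM_STOCK_SPLIT_DATE)
split_ratio = 4  # e.g. a 4-for-1 split -- placeholder value for illustration
adjusted = df.copy()
adjusted.date = pd.to_datetime(adjusted.date)
pre_split = adjusted.date < split_date
adjusted.loc[pre_split, 'close'] = adjusted.loc[pre_split, 'close'] / split_ratio
adjusted.plot(x='date', y='close', title='IBM (illustrative split adjustment)');
```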
## S&P companies list
```
%%with_globals
%%bigquery df --project {PROJECT}
SELECT *
FROM `stock_src.snp500`
df.industry.value_counts().plot(kind='barh');
```
We can join the price histories table with the S&P 500 table to compare industries:
**Learning objective 1,2**
```
%%with_globals
%%bigquery df --project {PROJECT}
WITH sp_prices AS
(
SELECT a.*, b.industry
FROM `stock_src.price_history` a
JOIN `stock_src.snp500` b
USING (symbol)
WHERE date >= "2000-01-01"
)
SELECT Date, industry, AVG(close) as close
FROM sp_prices
GROUP BY Date, industry
ORDER BY industry, Date
df.head()
```
Using pandas we can "unstack" our table so that each industry has it's own column. This will be useful for plotting.
```
# Pandas `unstack` to make each industry a column. Useful for plotting.
df_ind = df.set_index(['industry', 'Date']).unstack(0).dropna()
df_ind.columns = [c[1] for c in df_ind.columns]
df_ind.head()
ax = df_ind.plot(figsize=(16, 8))
# Move legend down.
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), shadow=True, ncol=2)
```
Let's scale each industry using min/max scaling. This will put all of the stocks on the same scale. Currently it can be hard to see the changes in stocks over time across industries.
**Learning objective 1**
```
def min_max_scale(df):
return (df - df.min()) / df.max()
scaled = min_max_scale(df_ind)
ax = scaled.plot(figsize=(16, 8))
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), shadow=True, ncol=2);
```
We can also create a smoothed version of the plot above using a [rolling mean](https://en.wikipedia.org/wiki/Moving_average). This is a useful transformation to make when visualizing time-series data.
```
SMOOTHING_WINDOW = 30 # Days.
rolling = scaled.copy()
for col in scaled.columns:
rolling[col] = scaled[col].rolling(SMOOTHING_WINDOW).mean()
ax = rolling.plot(figsize=(16, 8))
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), shadow=True, ncol=2);
```
Information technology had a large crash during the early 2000s and again in 2008/2009, along with all other stocks. After 2008, some industries were a bit slower to recover than others.
BONUS: In the next lab, we will want to predict the price of the stock in the future. What are some features that we can use to predict future price? Try visualizing some of these features.
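For example, here is a minimal sketch of a few candidate features, computed on the `df_stock` (GOOG) frame loaded earlier; the window sizes are arbitrary choices, not recommendations.
```
# A few candidate features on the GOOG price history loaded earlier.
features = df_stock[['Date', 'Close']].copy()
features['return_1d'] = features.Close.pct_change()            # daily return
features['ma_30'] = features.Close.rolling(30).mean()          # 30-day moving average
features['volatility_30'] = features.Close.rolling(30).std()   # 30-day rolling volatility
features['close_lag_5'] = features.Close.shift(5)              # close 5 trading days ago
features.plot(x='Date', y=['Close', 'ma_30'], title='GOOG close and 30-day moving average');
```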
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Automatic differentiation with JAX
## Main features
- Numpy wrapper
- Auto-vectorization
- Auto-parallelization (SPMD paradigm)
- Auto-differentiation
- XLA backend and JIT support
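For example, auto-vectorization lets you batch a function without writing a loop. A minimal, self-contained sketch (the same imports appear again in the next cell):
```
import jax
import jax.numpy as jnp

# vmap maps jnp.dot over the leading axis of both arguments:
# four independent dot products, no explicit Python loop.
batched_dot = jax.vmap(jnp.dot)
print(batched_dot(jnp.ones((4, 3)), jnp.arange(12.0).reshape(4, 3)))
```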
## How to compute gradient of your objective?
- Define it as a standard Python function
- Call ```jax.grad``` and voila!
- Do not forget to wrap these functions with ```jax.jit``` to speed up
```
import jax
import jax.numpy as jnp
```
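As a minimal sketch of the recipe above (define a plain Python function, then call ```jax.grad``` and wrap it with ```jax.jit```):
```
# d/dx of x**3 at x = 2.0 is 3 * 2.0**2 = 12.0
cube = lambda x: x ** 3
dcube = jax.jit(jax.grad(cube))
print(dcube(2.0))
```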
- By default, JAX exploits single-precision numbers ```float32```
- You can enable double precision (```float64```) by hand.
```
from jax.config import config
config.update("jax_enable_x64", True)
n = 5
x = jax.random.normal(jax.random.PRNGKey(0), (n,))
y = jax.random.normal(jax.random.PRNGKey(10), (n,))
print(x.shape, y.shape)
print(x @ y)
print(x.T @ y)
print(jnp.outer(x, y))
print(x[:, None].shape, y.shape)
print((x[None, :] @ y)[0])
@jax.jit # Just-in-time compilation
def f(x, A, b):
res = A @ x - b
res = jax.ops.index_update(res, 0, 100)
# y = res[res > 1]
# res[0] = 100
return res @ res
gradf = jax.grad(f, argnums=0, has_aux=False)
```
## Random numbers in JAX
- JAX focuses on the reproducibility of the runs
- The analogue of a random seed (an explicit PRNG key) is **a required argument** of all functions that generate something random
- More details and references on the design of ```random``` submodule are [here](https://github.com/google/jax/blob/master/design_notes/prng.md)
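A minimal sketch of the usual pattern: split a key whenever fresh randomness is needed, instead of reusing the same key.
```
# Reusing a key reproduces the same numbers; split it to get new ones.
key = jax.random.PRNGKey(42)
key, subkey = jax.random.split(key)
print(jax.random.normal(subkey, (3,)))
```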
```
n = 1000
x = jax.random.normal(jax.random.PRNGKey(0), (n, ))
A = jax.random.normal(jax.random.PRNGKey(0), (n, n))
b = jax.random.normal(jax.random.PRNGKey(0), (n, ))
print("Check correctness", jnp.linalg.norm(gradf(x, A, b) - 2 * A.T @ (A @ x - b)))
# print(gradf(x, A, b))
print("Compare speed")
print("Analytical gradient")
# %timeit 2 * A.T @ (A @ x - b)
print("Grad function")
%timeit gradf(x, A, b).block_until_ready()
jit_gradf = jax.jit(gradf)
print("Jitted grad function")
%timeit jit_gradf(x, A, b).block_until_ready()
hess_func = jax.jit(jax.hessian(f))
print("Check correctness", jnp.linalg.norm(2 * A.T @ A - hess_func(x, A, b)))
print("Time for hessian")
%timeit hess_func(x, A, b).block_until_ready()
print("Emulate hessian and check correctness",
jnp.linalg.norm(jax.jit(hess_func)(x, A, b) - jax.jacfwd(jax.jacrev(f))(x, A, b)))
print("Time of emulating hessian")
hess_umul_func = jax.jit(jax.jacfwd(jax.jacrev(f)))
%timeit hess_umul_func(x, A, b).block_until_ready()
```
## Summary
- JAX is a simple and extensible tool for problems where autodiff is crucial
- JIT is a key to fast Python code
- Input/output dimensions are important
- A Hessian-vector product is faster than forming the explicit Hessian and multiplying it by a vector
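As a sketch of the last point, a Hessian-vector product for the quadratic objective ```f``` above can be formed without materializing the Hessian (the forward-over-reverse pattern from the JAX autodiff cookbook):
```
# Hessian-vector product without forming the full Hessian.
def hvp(x, v):
    return jax.jvp(jax.grad(lambda z: f(z, A, b)), (x,), (v,))[1]

v = jax.random.normal(jax.random.PRNGKey(1), (n,))
print(jnp.linalg.norm(hvp(x, v) - hess_func(x, A, b) @ v))  # should be ~0
```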
<h2 id='part1'>Project 1: Blog</h2>
Looking into the population of the Stack Overflow data, I wanted to look at the differences between men and women.
__The questions that I want to answer are:__
<br> a) How big is the disparity in pay between men and women?
<br> b) How does having children impact progression?
<br> c) Women in STEM… Is there really an obstacle? (i.e is it harder for women to break into?)
I thought a good place to start was looking at what the breakdown of the population was by gender.
For that I needed to read in the data:
```
#importing packages needed for the project
import numpy as np
import pandas as pd
import os
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
import matplotlib.pyplot as plt
%matplotlib inline
#Reading in the StackOverflow Developer Survey data
if os.path.exists(os.path.join(os.getcwd(),'df_personal_data.pkl')):
df_personal = pd.read_pickle('df_personal_data.pkl')
else:
file_path = os.path.join(os.getcwd(),r"StackOverflow_Data\2018\survey_results_public.csv")
df=pd.read_csv(file_path)
#Selecting columns needed for the analysis - started with just Gender and added these slowly as my analysis below needed.
cols = ['CareerSatisfaction','JobSatisfaction', 'CompanySize',
'Country','Gender','Age','ConvertedSalary',
'UndergradMajor','YearsCoding','Dependents']
df_personal=df[cols]
df_personal.to_pickle(os.path.join(os.getcwd(),'df_personal_data.pkl'))
#Outputting metrics on gender breakdown
#Defined a function to convert multiple choice columns into a usable output
def split_and_stack(df_orig, col, sep):
"""This splits multiple choice answers within a column into multiple columns, then converts them back into extra rows
so each option selected by 1 user will be on a new row, meaning that the population can be analysed.
Steps:
1) Splits a single column in a dataframe into multiple columns (/levels), using a defined seperator.
2) Stacks these extra column entries into rows, but shows indexes of extra levels which the data was split over.
3) Extra levels / generated columns are then dropped.
4) Renames the last column as the Orignal column name.
Parameters:
df_orig (pandas.DataFrame): A DataFrame containing columns with multiple choice answers.
col (string): The column which requires multiple choice answers to be split.
sep (string): The seperator which the column (col) mentioned above needs to be split over.
Returns:
pandas.DataFrame:Returning a DataFrame of the total population with extra rows (multiple for the same index)
for multiple choice responses.
"""
new_df = df_orig[col].str.split(sep,expand=True).stack().to_frame().reset_index()
new_df = new_df.drop(['level_0','level_1'], axis=1)
new_df.columns = [col]
return new_df
#splitting the data into usable rows, see function defined above (preparing the data)
df_gender = split_and_stack(df_personal, 'Gender', ';')
#Grouping by and calculating Gender breakdowns.
#Groupby disregards null Gender values so these are removed; we don't want to see them as they don't give us information about the gender population
gender = df_gender.groupby('Gender')['Gender'].count().sort_values(ascending=False)/len(df_gender)
gender_stats = zip(list(gender.index),list(gender))
#Printing stats in percentage form
for gender in gender_stats:
print(gender[0] + ": " +"{:.2%}".format(gender[1]))
```
### Question 1: How big is the disparity in pay between men and women?
Looking at the stark differences in population size, I wondered what else was different about the populations between men and women.
The gender pay gap is regularly in the media and can be detrimental to the view of professions / businesses, and I wanted to assess how big the impact on pay is.
```
#Outputting graph to highlight salary differences by percentile.
#Splitting data to male only and female only as population sizes are significantly different.
#Null values for Gender are removed as we don't know their gender and could skew the results.
#Moreover, imputing values wouldn't make sense, we could only use the mode which would just be classifying them all as male.
df_male = df_personal.dropna(subset=['Gender'], axis=0)[df_personal.Gender.dropna(axis=0).\
apply(lambda x: True if 'Male' in x else False)]
df_female = df_personal.dropna(subset=['Gender'],axis=0)[df_personal.Gender.dropna(axis=0).\
apply(lambda x: True if 'Female' in x else False)]
#Finding percentiles of salary for male and female.
#The Quantile function ignores null values for ConvertedSalary. If we imputed values (i.e. replaced nulls with the mean/median),
#then this would potentially skew the results and change the distribution below.
female_percentiles = [ (i*100, df_female.ConvertedSalary.quantile(i)) for i in np.arange(0,1,0.005) ]
male_percentiles = [ (i*100, df_male.ConvertedSalary.quantile(i)) for i in np.arange(0,1,0.005) ]
#Separating x and y values for the graph
x_female_percentile = [x[0] for x in female_percentiles]
y_female_percentile = [y[1] for y in female_percentiles]
x_male_percentile = [x[0] for x in male_percentiles]
y_male_percentile = [y[1] for y in male_percentiles]
#setting graph limits x and y limits and labelling axis
plt.ylim((50000,200000))
plt.ylabel('Salary (USD)')
plt.xlabel('Percentile')
plt.xlim((50,100))
plt.plot(x_female_percentile, y_female_percentile, label = 'Female')
plt.plot(x_male_percentile, y_male_percentile, label = 'Male')
plt.legend(loc='upper left', prop={'size':10})
#Saving file
plt.savefig(os.path.join(os.getcwd(),'Pay_gap.png'),bbox_inches='tight')
```
It is clear from the graph above that there is a significant difference between men and women in high paying roles.
<br>
<br> This prompted another question: if women are paid less, does this affect their satisfaction in their current role?
```
#Outputting graph of job satisfaction by gender
#Re-casting the JobSatisfaction to Ordered Category as this data is ordered and is needed to correctly order the output
df_male['JobSatisfaction']=df_male['JobSatisfaction']\
.astype(pd.api.types.CategoricalDtype(
categories=['Extremely dissatisfied','Moderately dissatisfied',
'Slightly dissatisfied','Neither satisfied nor dissatisfied',
'Slightly satisfied','Moderately satisfied','Extremely satisfied'],
ordered=True))
df_female['JobSatisfaction']=df_female['JobSatisfaction']\
.astype(pd.api.types.CategoricalDtype(
categories=['Extremely dissatisfied','Moderately dissatisfied',
'Slightly dissatisfied','Neither satisfied nor dissatisfied',
'Slightly satisfied','Moderately satisfied','Extremely satisfied'],
ordered=True))
#Finding percentage breakdown for career satisfaction. Count/Groupby function ignores null values for CareerSatisfaction
#Since we just want population distribution, it makes sense to ignore these.
female_job_sat = df_female.groupby('JobSatisfaction').JobSatisfaction.count().sort_index()/len(df_female)
male_job_sat = df_male.groupby('JobSatisfaction').JobSatisfaction.count().sort_index()/len(df_male)
#Formatting and generating a graph
plt.ylabel('Proportion')
plt.xticks(rotation=90)
plt.plot(list(female_job_sat.index), list(female_job_sat), label = 'Female')
plt.plot(list(male_job_sat.index), list(male_job_sat), label = 'Male')
plt.legend(loc='upper left', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'Gender_job_satisfaction.png'),bbox_inches='tight')
```
Even though the above indicates men may be slightly more satisfied with their jobs, the distribution is generally the same and I would say satisfaction for both genders is pretty similar.
<br>
<br> This didn't seem intuitive to me, so I explicitly looked at the salaries by Job Satisfaction for both genders to get a better understanding.
```
#Outputting a graph of the salary for men and women by job satisfaction breakdown
#Mean function ignores null values for ConvertedSalary. Groupby function ignores null values for CareerSatisfaction.
#Since we want mean Salary values, imputing median may skew this figure with large numbers of nulls and mean wouldn't affect this, so ignoring.
#We also want this figure to be consistent with graph above, so not imputing JobSatisfaction values.
female_job_sat_mean = df_female.groupby('JobSatisfaction').ConvertedSalary.mean().sort_index()
male_job_sat_mean = df_male.groupby('JobSatisfaction').ConvertedSalary.mean().sort_index()
#Formatting and generating a graph
plt.title('Mean Salary by Satisfaction')
plt.ylabel('Salary (USD)')
plt.xticks(rotation=90)
plt.plot(list(female_job_sat_mean.index), list(female_job_sat_mean), label = 'Female')
plt.plot(list(male_job_sat_mean.index), list(male_job_sat_mean), label = 'male')
plt.legend(loc='upper right', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'Gender_pay_by_Satisfaction.png'),bbox_inches='tight')
```
The above graph illustrates that salary and JobSatisfaction are not directly correlated; if anything, it suggests that higher-paid professionals may actually dislike their jobs more!
<br>
<br> To explain why salaries may be different between men and women, as discovered above, I thought looking into the experience of the individuals would be a better indicator.
```
#Outputting graph of men and women's 90th percentile salaries by years of experience,
#Groupby function ignores null values in YearsCoding, quantile function ignores null values for ConvertedSalary
#If we imputed the converted salary values, this may shift the distribution, so these are ignored.
#Years coding is directly related to age and it may be nonsensical to impute values as this relationship wouldn't be preserved.
female_exp_sal = df_female.groupby('YearsCoding').ConvertedSalary.quantile(0.9)
male_exp_sal = df_male.groupby('YearsCoding').ConvertedSalary.quantile(0.9)
#Ordering points for graph so it's in Experience ascending order
female_exp_sal_sort = sorted(list(zip(female_exp_sal.index, female_exp_sal)),key = lambda x: int(x[0].split()[0].split('-')[0]))
male_exp_sal_sort = sorted(list(zip(male_exp_sal.index, male_exp_sal)),key = lambda x: int(x[0].split()[0].split('-')[0]))
#Separating x and y values for the graph
x_female_exp_sal = [x[0] for x in female_exp_sal_sort]
y_female_exp_sal = [y[1] for y in female_exp_sal_sort]
x_male_exp_sal = [x[0] for x in male_exp_sal_sort]
y_male_exp_sal = [y[1] for y in male_exp_sal_sort]
#Formatting and generating a graph
plt.title('90th Percentile Salary by Experience')
plt.ylabel('Salary (USD)')
plt.xlabel('Years Coding')
plt.xticks(rotation=90)
plt.plot(x_female_exp_sal, y_female_exp_sal, label = 'Female')
plt.plot(x_male_exp_sal, y_male_exp_sal, label = 'male')
plt.legend(loc='upper left', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'90thpercentile_pay_gap_exp.png'),bbox_inches='tight')
```
As expected, the graph above shows how closely correlated experience and salary are.
<br>There didn't seem to be a huge difference in this correlation between men and women, so it doesn't really explain why there were bigger pay differences in these higher percentiles.
<br>
<br> I decided to look at the breakdown of the population by experience, to shed some light on why their salaries may be different:
```
#Outputting graph of male and female population distribution by years of coding experience.
#Count & Groupby ignores null values in YearsCoding
#YearsCoding is directly correlated with age, so imputing values will not preserve this relationship
female_exp = df_female.groupby('YearsCoding').YearsCoding.count()/len(df_female)
male_exp = df_male.groupby('YearsCoding').YearsCoding.count()/len(df_male)
#Ordering points for graph so it's in Experience ascending order
female_exp_sort = sorted(list(zip(female_exp.index, female_exp)),key = lambda x: int(x[0].split()[0].split('-')[0]))
male_exp_sort = sorted(list(zip(male_exp.index, male_exp)),key = lambda x: int(x[0].split()[0].split('-')[0]))
#Separating x and y values for the graph
x_female_exp = [x[0] for x in female_exp_sort]
y_female_exp = [y[1] for y in female_exp_sort]
x_male_exp = [x[0] for x in male_exp_sort]
y_male_exp = [y[1] for y in male_exp_sort]
#Formatting and generating a graph
plt.title('Population distribution by experience')
plt.ylabel('Proportion')
plt.xlabel('Years Coding')
plt.xticks(rotation=90)
plt.plot(x_female_exp, y_female_exp, label = 'Female')
plt.plot(x_male_exp, y_male_exp, label = 'male')
plt.legend(loc='upper right', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'Gender_exp_pop_dist.png'),bbox_inches='tight')
```
As can be seen, the Female population is skewed to the left, meaning that there is a significantly greater proportion of more junior coders, potentially explaining why there is a disparity in pay.
<br>
<br> To be sure that this is the case, I wanted to look at the difference in mean pay as well, to better understand the overall correlation between coding experience and salary
```
#Calculating and outputting a graph of mean salary by years of experience.
#mean() ignores null values for ConvertedSalary; groupby() ignores null values in YearsCoding
#It doesn't make sense to impute the values here, as Age and YearsCoding are implicitly linked, and
#imputing mean values for Salary wouldn't change our findings (as we are taking the mean)
female_sal_exp_mean = df_female.groupby('YearsCoding').ConvertedSalary.mean()
male_sal_exp_mean = df_male.groupby('YearsCoding').ConvertedSalary.mean()
#Ordering points for graph so it's in Experience ascending order
female_sal_exp_mean_sort = sorted(list(zip(female_sal_exp_mean.index, female_sal_exp_mean)),key = lambda x: int(x[0].split()[0].split('-')[0]))
male_sal_exp_mean_sort = sorted(list(zip(male_sal_exp_mean.index, male_sal_exp_mean)),key = lambda x: int(x[0].split()[0].split('-')[0]))
#Separating x and y values for the graph
x_female_mean_sal_exp = [x[0] for x in female_sal_exp_mean_sort]
y_female_mean_sal_exp = [y[1] for y in female_sal_exp_mean_sort]
x_male_mean_sal_exp = [x[0] for x in male_sal_exp_mean_sort]
y_male_mean_sal_exp = [y[1] for y in male_sal_exp_mean_sort]
#Formatting and generating a graph
plt.title('Mean Pay by Experience')
plt.ylabel('Salary (USD)')
plt.xticks(rotation=90)
plt.plot(x_female_mean_sal_exp, y_female_mean_sal_exp, label = 'Female')
plt.plot(x_male_mean_sal_exp, y_male_mean_sal_exp, label = 'Male')
plt.legend(loc='upper left', prop={'size':10})
plt.savefig(os.path.join(os.getcwd(),'MedianPay_by_Exp.png'),bbox_inches='tight')
```
In general, the correlation between experience and salary holds true, giving a good explanation for why women's salaries may be lower, as there are proportionally more junior women.
<br>
<br> However, there is an exception for the 24-29 years of experience bracket. This could be put down to sample size, but it could also be symptomatic of issues women face around these years. Someone with 24-29 years of coding experience is likely to be around 50 years old, which led me to investigate a different challenge that women of this age may have faced...
<h2 id='part1'>Question 2: How does having children impact progression?</h2>
<br>
Typically, women have a longer absence from work due to maternity leave and undertake more of the care-giving role than men. I wanted to see if having children had a significant impact on their progression. One way of measuring progression is salary, which is what I have focused on below.
```
#Outputting graph to show female salary differences with and without children, by age
#We are only interested in the population who answered both the dependents question and the salary question.
#Therefore, dropping rows where people haven't answered these questions, as they won't inform the outcome
df_dep_no_null_f = df_female.dropna(subset=['Dependents','ConvertedSalary'],how='any')
df_dep_no_null_m = df_male.dropna(subset=['Dependents','ConvertedSalary'],how='any')
#Filtering for the ages at which children are most likely to have an impact on salary.
#Women tend to retire earlier than men, and under 25s are unlikely to have children and may not have a salary either.
#Therefore, these groups have been removed
ages_for_children = ['25 - 34 years old','35 - 44 years old','45 - 54 years old']
df_dependents_f=df_dep_no_null_f[df_dep_no_null_f.Age.apply(lambda x: True if x in ages_for_children else False)]
df_dependents_m=df_dep_no_null_m[df_dep_no_null_m.Age.apply(lambda x: True if x in ages_for_children else False)]
#Finding average Salaries by age and by dependents status
#groupby() removes nulls for Age; since we want to look at effects across age bands, these values have been dropped.
female_sal_series = df_dependents_f.groupby(['Dependents','Age']).ConvertedSalary.mean().sort_index()
male_sal_series = df_dependents_m.groupby(['Dependents','Age']).ConvertedSalary.mean().sort_index()
#Formatting and generating a graph
plt.plot(list(female_sal_series['Yes'].index), list(female_sal_series['Yes']), label='Female&Children')
plt.plot(list(female_sal_series['No'].index), list(female_sal_series['No']), label='Female&NoChildren')
plt.title("Mean Female Salary by Age")
plt.ylabel('Salary (USD)')
plt.xticks(rotation=90)
plt.legend()
plt.savefig(os.path.join(os.getcwd(),'FemalePay_by_age&dependents.png'),bbox_inches='tight')
```
As you can see, the graph above indicates having children has a significant impact on women's salaries, especially later in life.
<br>
<br> I wanted to evaluate how much of an impact it actually has, and how this is different from men.
```
#Outputting/ Calculating the relative differences in salaries with and without children
female_dep_cost = female_sal_series['Yes']/female_sal_series['No']
male_dep_cost = male_sal_series['Yes']/male_sal_series['No']
#Combining the results into one dataframe for output
df_dep_cost = pd.concat([pd.DataFrame(list(female_dep_cost), index = list(female_dep_cost.index), columns = ['Female']),
pd.DataFrame(list(male_dep_cost), index = list(male_dep_cost.index),columns=['Male'])],axis=1)
#Reformatting the data to be percentage change and in percentage format
df_dep_cost['Female'] = df_dep_cost['Female'].apply(lambda x: "{:.2%}".format(x-1))
df_dep_cost['Male'] = df_dep_cost['Male'].apply(lambda x: "{:.2%}".format(x-1))
df_dep_cost
```
Clearly, from the above, men's income is much more stable when it comes to having children. Moreover, the magnitude of the difference is significant. Women can expect to have a large earnings hit if they have children.
It is worth noting that the stability __may__ be due to the larger population size, but it's difficult to determine this effect.
<h2 id='part1'>Question 3: Women in STEM… Is there really an obstacle??</h2>
<br>
Following on from the differences relating to having children, I wanted to see if there were any other drivers of why older women may have a lower salary.
<br>
<br> I started to think about generational differences and how women were less likely to have higher education, but also less likely to pursue STEM subjects. In recent years there have been lots of initiatives to increase women's participation in STEM subjects, and I wanted to see what the impact of this was. I started with interest in a Computer Science / technical degree, as the dataset relates to developers:
```
#Calculating total proportion of Computer Science related degrees for each gender
#Adding flags for technical degrees. Looking at the data, people with a technical background in the Majors below
#would be better equipped for a future in computer science / a developer role
technical_degree = ['Computer science, computer engineering, or software engineering',
                    'Information systems, information technology, or system administration',
                    'Web development or web design']
#Dropping entries with NaNs for undergrad major, otherwise we would be assuming they were 'Non-Technical'
#for all nulls in the population which would skew the results. Moreover, removing nulls also removes those who didn't
#go to university. Since we are considering those with technical degrees, we want to remove these people from our population
df_female_grad = df_female.dropna(subset=['UndergradMajor'],axis=0)
df_male_grad = df_male.dropna(subset=['UndergradMajor'],axis=0)
df_female_grad['TechnicalDegree'] = df_female_grad.UndergradMajor\
.apply(lambda x : 'Technical' if x in technical_degree else 'Non-Technical')
df_male_grad['TechnicalDegree'] = df_male_grad.UndergradMajor\
.apply(lambda x : 'Technical' if x in technical_degree else 'Non-Technical')
#Finding the number of technical vs non-technical people in population by Gender
female_tech_bd = df_female_grad.groupby('TechnicalDegree').TechnicalDegree.count()/len(df_female_grad)
male_tech_bd = df_male_grad.groupby('TechnicalDegree').TechnicalDegree.count()/len(df_male_grad)
#Formatting and printing the output
print('Women with a Computer Science related degree: ' + "{:.2%}".format(female_tech_bd['Technical']))
print('Men with a Computer Science related degree: ' + "{:.2%}".format(male_tech_bd['Technical']))
```
At first glance, it is clear that a larger share of men have technical degrees, which indicates a bias in their education towards technical fields.
<br>
<br> Exploring this further...
```
#Outputting graph showing the age distribution of graduates.
#Filtering list to only Technical people to show age distribution of this
df_female_tech = df_female_grad[df_female_grad['TechnicalDegree']=='Technical']
df_male_tech = df_male_grad[df_male_grad['TechnicalDegree']=='Technical']
#Filtering age to that of graduate ages, as we are considering Undergraduate Majors
over_35 = ['35 - 44 years old','45 - 54 years old','55 - 64 years old','65 years or older']
under_25 = ['25 - 34 years old','18 - 24 years old']
graduate_ages = under_25 + over_35
#groupby() removes nulls for Age; since we want the distribution across age bands, these values have been dropped.
#Imputing these values may skew the distribution
female_tech_bd_age = df_female_tech.groupby('Age')['Age'].count()[graduate_ages].sort_index()/len(df_female)
male_tech_bd_age = df_male_tech.groupby('Age')['Age'].count()[graduate_ages].sort_index()/len(df_male)
##Printing statistics for reference in blog, age and gender differences in population
print('Women, 35 and over: ' + "{:.2%}".format(female_tech_bd_age[over_35].sum()))
print('Men, 35 and over: ' + "{:.2%}".format(male_tech_bd_age[over_35].sum()))
print('Women, 18 to 34: ' + "{:.2%}".format(female_tech_bd_age[under_25].sum()))
print('Men, 18 to 34: ' + "{:.2%}".format(male_tech_bd_age[under_25].sum()))
#Formatting and generating a graph
plt.plot(list(female_tech_bd_age.index), list(female_tech_bd_age), label='Female')
plt.plot(list(male_tech_bd_age.index), list(male_tech_bd_age), label='Male')
plt.title("Tech Graduates by Age")
plt.ylabel('Proportion')
plt.xticks(rotation=90)
plt.legend()
plt.savefig(os.path.join(os.getcwd(),'Tech_grads_by_age.png'),bbox_inches='tight')
```
It is clear that men favoured technical subjects from age 25 and above, putting women at a disadvantage for developer roles. However, it can also be seen that for the youngest age bracket, there has been a switch in the proportion of men and women exploring these options, with women now pursuing technical subjects more than men.
<br>
<br> I wanted to have a deeper look into undergraduate majors, because many people in the survey did not have these degrees (i.e. 35% of men, 45% of women). STEM subjects are closely linked to developer roles, and I wanted a wider understanding of educational bias.
```
#Calculating total proportion of STEM related degrees for each gender
#Adding a flag for STEM subjects and repeating the analysis above
#Note: still only taking the graduate population, as above.
non_STEM = ['Fine arts or performing arts (ex. graphic design, music, studio art)',
'A social science (ex. anthropology, psychology, political science)',
'A humanities discipline (ex. literature, history, philosophy)',
'A business discipline (ex. accounting, finance, marketing)',
'I never declared a major']
df_female_grad['STEM'] = df_female_grad.UndergradMajor\
.apply(lambda x : 'Non-STEM' if x in non_STEM else 'STEM')
df_male_grad['STEM'] = df_male_grad.UndergradMajor\
.apply(lambda x : 'Non-STEM' if x in non_STEM else 'STEM')
##Printing statistics for reference in blog, gender &STEM differences in population
print("Women in STEM: {:.2%}".format(df_female_grad.groupby('STEM').STEM.count()['STEM']/len(df_female_grad)))
print("Men in STEM: {:.2%}".format(df_male_grad.groupby('STEM').STEM.count()['STEM']/len(df_male_grad)))
```
At first glance, it is clear that a larger share of men have STEM degrees, which indicates a bias in their education towards technical fields.
<br>
<br> Exploring this further...
```
#Calculating the population distribution of STEM related degrees for each gender by age
#Filtering out stem graduates only as we want to look at the demographics of this.
df_female_STEM = df_female_grad[df_female_grad['STEM']=='STEM']
df_male_STEM = df_male_grad[df_male_grad['STEM']=='STEM']
#Only considering working professionals as they are most likely to have degrees and be in employment
#People below these ages are unlikely to have a degree, so including them would not make sense.
working_professionals = ['18 - 24 years old','25 - 34 years old','35 - 44 years old',
'45 - 54 years old','55 - 64 years old']
#groupby() and count() remove nulls from the calculation. We don't want to impute these values as it may skew the population distribution
female_STEM_bd_age = df_female_STEM.groupby('Age')['Age'].count()[working_professionals]/len(df_female_STEM)
male_STEM_bd_age = df_male_STEM.groupby('Age')['Age'].count()[working_professionals]/len(df_male_STEM)
#Combining data together into one DataFrame
df_STEM_bd = pd.concat([pd.DataFrame(list(female_STEM_bd_age), index = list(female_STEM_bd_age.index), columns = ['Female']),
pd.DataFrame(list(male_STEM_bd_age), index = list(male_STEM_bd_age.index),columns=['Male'])],axis=1)
#Reformatting data into percentages to 2dp.
df_STEM_bd['Female'] = df_STEM_bd['Female'].apply(lambda x: "{:.2%}".format(x))
df_STEM_bd['Male'] = df_STEM_bd['Male'].apply(lambda x: "{:.2%}".format(x))
df_STEM_bd
```
Looking at the population distribution of STEM graduates, this does show a preference among the younger generation for these subjects. __However__, the data is skewed by the fact that the majority of respondents were in the lower age ranges, meaning it doesn't give us as much information as initially thought.
<br>
<br> As a result, looking at the breakdowns of the population for each age group would be more indicative.
```
#Calculating the STEM percentage for EACH age group, by gender
#groupby() removes nulls for Age; since we want STEM distributions for age bands, we don't want to impute these values.
#Otherwise, it may skew the distribution
STEM_count_f = df_female_grad.groupby(['STEM','Age']).STEM.count()['STEM'][working_professionals]
STEM_count_f_total = df_female_grad.groupby('Age').STEM.count()[working_professionals]
STEM_count_m = df_male_grad.groupby(['STEM','Age']).STEM.count()['STEM'][working_professionals]
STEM_count_m_total = df_male_grad.groupby('Age').STEM.count()[working_professionals]
##Calculating the STEM population percentage by age
STEM_bd_female = STEM_count_f/STEM_count_f_total
STEM_bd_male = STEM_count_m/STEM_count_m_total
#Combining data together into one DataFrame
df_STEM_bd_2 = pd.concat([pd.DataFrame(list(STEM_bd_female), index = list(STEM_bd_female.index), columns = ['Female']),
pd.DataFrame(list(STEM_bd_male), index = list(STEM_bd_male.index),columns=['Male'])],axis=1)
#Reformatting data into percentages to 2dp.
df_STEM_bd_2['Female'] = df_STEM_bd_2['Female'].apply(lambda x: "{:.2%}".format(x))
df_STEM_bd_2['Male'] = df_STEM_bd_2['Male'].apply(lambda x: "{:.2%}".format(x))
df_STEM_bd_2
```
This output gives us a lot more information about the relationship between STEM and age because it isn't skewed by the shape of the population. <br>
<br> It is now clear that men are a lot more likely to pursue STEM subjects than women, meaning they have an advantage in developer-type roles, which often require skills from these subjects.
<br>
<br>
However, it is clear that there is a generational bias in education which is slowly being rectified. More and more women are pursuing these fields and overcoming the obstacles they once may have faced.
<br>
<br> I wanted to have a final look at how these differences really impacted women's progression and salaries
```
#Outputting a graph of women's salaries by age with and without a degree in a STEM area.
#groupby() removes nulls for Age; since we want STEM salaries by age bands, we don't want to impute these values.
#Otherwise, it may skew the distribution. Moreover, imputing salaries with the mean wouldn't have an impact, as we're trying to find the mean
df_STEM_age_f = df_female_grad.groupby(['STEM','Age']).ConvertedSalary.mean()['STEM'][working_professionals]
df_NSTEM_age_f = df_female_grad.groupby(['STEM','Age']).ConvertedSalary.mean()['Non-STEM'][working_professionals]
#Formatting and generating a graph
plt.plot(list(df_STEM_age_f.index),list(df_STEM_age_f), label='Female & STEM')
plt.plot(list(df_NSTEM_age_f.index),list(df_NSTEM_age_f), label='Female & Non-STEM')
plt.title("Women in STEM\nSalary Comparison by Age")
plt.ylabel('Salary (USD)')
plt.xticks(rotation=90)
plt.legend()
plt.savefig(os.path.join(os.getcwd(),'Female_STEM_Salary_by_age.png'),bbox_inches='tight')
```
As we can clearly see above, STEM degrees have historically led to higher-paying professions, meaning that women's salary progression has been a harder battle due to the bias in their education.
However, there is optimism for the future, as the trends in educational bias indicate that the gap between men and women is reducing and we are moving towards a more equal society.
<h2 id='part1'>Additional material: Showcase Data prep, imputing values and ML techniques</h2>
<br> I tried to incorporate machine learning / sklearn algorithms into my blog; however, the models produced did not give me sensible results. Instead, I've produced a framework for what I would have done had a model given me a sensible output.
```
#Showcase of data prep and evaluation for a machine learning (ML) model
#Splitting models into male and female since the data is skewed and is overfitting on male attributes.
#Converting variables which contain categorical data, into categorical variables (JobSatisfaction has been done above)
#Only setting the dtype on one dataframe, since it is just used to derive the ordered category lists for converting to floats for the ML algorithm
df_male['CareerSatisfaction']=df_male['CareerSatisfaction']\
.astype(pd.api.types.CategoricalDtype(
categories=['Extremely dissatisfied','Moderately dissatisfied',
'Slightly dissatisfied','Neither satisfied nor dissatisfied',
'Slightly satisfied','Moderately satisfied','Extremely satisfied'],
ordered=True))
df_male['CompanySize']=df_male['CompanySize']\
.astype(pd.api.types.CategoricalDtype(
categories=['Fewer than 10 employees','10 to 19 employees',
'20 to 99 employees','100 to 499 employees',
'500 to 999 employees','1,000 to 4,999 employees',
'5,000 to 9,999 employees','10,000 or more employees'],
ordered=True))
df_male['Age']=df_male['Age']\
.astype(pd.api.types.CategoricalDtype(
categories=['Under 18 years old','18 - 24 years old','25 - 34 years old',
'35 - 44 years old','45 - 54 years old','55 - 64 years old',
'65 years or older'],
ordered=True))
#Dropping Gender axis as this is not used, since we are creating a male and female model
df_male = df_male.drop(['Gender'], axis=1)
df_female = df_female.drop(['Gender'], axis=1)
#Adding flags for STEM and technical subjects, creating features from the analysis above (which showed a correlation between STEM and salary)
#Dropping UndergradMajor afterwards, as these features are engineered from it.
#Making sure to distinguish the nulls so they aren't all classified as the wrong thing.
df_male['STEM'] = df_male['UndergradMajor'].apply(lambda x : [0] if x in non_STEM else [1,1 if pd.isna(x) else 0])
df_female['STEM'] = df_female['UndergradMajor'].apply(lambda x : [0] if x in non_STEM else [1,1 if pd.isna(x) else 0])
df_male['STEM'] = df_male['STEM'].apply(lambda x : np.nan if sum(x) == 2 else sum(x))
df_female['STEM'] = df_female['STEM'].apply(lambda x : np.nan if sum(x) == 2 else sum(x))
df_male['Technical'] = df_male['UndergradMajor'].apply(lambda x : [1] if x in technical_degree else [0, 2 if pd.isna(x) else 0])
df_female['Technical'] = df_female['UndergradMajor'].apply(lambda x : [1] if x in technical_degree else [0, 2 if pd.isna(x) else 0])
df_male['Technical'] = df_male['Technical'].apply(lambda x : np.nan if sum(x) == 2 else sum(x))
df_female['Technical'] = df_female['Technical'].apply(lambda x : np.nan if sum(x) == 2 else sum(x))
df_male = df_male.drop(['UndergradMajor'],axis=1)
df_female = df_female.drop(['UndergradMajor'],axis=1)
#Mapping 'Dependents' column to a flag, indicating whether or not the indiviual has children/dependents
dependent_mapping = {'Yes' : 1, 'No' : 0}
df_male['Dependents'] = df_male['Dependents'].apply(lambda x : dependent_mapping[x] if pd.isna(x) == False else x)
df_female['Dependents'] = df_female['Dependents'].apply(lambda x : dependent_mapping[x] if pd.isna(x) == False else x)
#Creating ordered lists of the categorical variables so that they can be indexed, converting the original columns into numeric scales
#i.e. a numbered scale, for example JobSatisfaction (0 being extremely dissatisfied up to 6, extremely satisfied)
ordered_satisfaction = list(df_male.groupby('CareerSatisfaction').CareerSatisfaction.count().sort_index().index)
ordered_size = list(df_male.groupby('CompanySize').CompanySize.count().sort_index().index)
ordered_age = list(df_male.groupby('Age').Age.count().sort_index().index)
df_male['CompanySize'] = df_male['CompanySize']\
.apply(lambda x : ordered_size.index(x) if pd.isna(x) == False else np.nan)
df_male['CareerSatisfaction'] = df_male['CareerSatisfaction']\
.apply(lambda x : ordered_satisfaction.index(x) if pd.isna(x) == False else np.nan)
df_male['JobSatisfaction'] = df_male['JobSatisfaction']\
.apply(lambda x : ordered_satisfaction.index(x) if pd.isna(x) == False else np.nan)
df_male['Age'] = df_male['Age']\
.apply(lambda x : ordered_age.index(x) if pd.isna(x) == False else np.nan)
df_female['CompanySize'] = df_female['CompanySize']\
.apply(lambda x : ordered_size.index(x) if pd.isna(x) == False else np.nan)
df_female['CareerSatisfaction'] = df_female['CareerSatisfaction']\
.apply(lambda x : ordered_satisfaction.index(x) if pd.isna(x) == False else np.nan)
df_female['JobSatisfaction'] = df_female['JobSatisfaction']\
.apply(lambda x : ordered_satisfaction.index(x) if pd.isna(x) == False else np.nan)
df_female['Age'] = df_female['Age']\
.apply(lambda x : ordered_age.index(x) if pd.isna(x) == False else np.nan)
#Showcasing another way to convert Age/YearsCoding columns to ML compatible inputs
#Taking the middle of the bands (i.e 0-2 years gets mapped to 1 year)
df_male['YearsCoding'] = df_male['YearsCoding']\
.apply(lambda x : sum([float(y) for y in x.split()[0].split('-')])/len(x.split()[0].split('-')) if pd.isna(x) == False else np.nan)
df_female['YearsCoding'] = df_female['YearsCoding']\
.apply(lambda x : sum([float(y) for y in x.split()[0].split('-')])/len(x.split()[0].split('-')) if pd.isna(x) == False else np.nan)
#Splitting out country columns into 11 columns, 1 for each top 10 most indicated country and 1 for other.
#Placing a flag for each of the column, so if the country is United states, 1 would be in the united states column and 0 elsewhere
for frame in [df_male,df_female]:
top_n_countries = list(frame.groupby('Country')['Country'].count().sort_values(ascending=False).nlargest(10).index)
frame["Country_Other"] = frame['Country'].apply(lambda x : 0 if x in top_n_countries else 1)
for value in top_n_countries:
frame["Country_" + value] = frame['Country'].apply(lambda x : 1 if x == value else 0)
#Dropping the original Country column as the features have been extracted in an ML-friendly manner
df_male = df_male.drop(['Country'],axis=1)
df_female = df_female.drop(['Country'],axis=1)
```
The data has now been processed into an ML-friendly format, except for the existence of nulls.
```
#Highlighting the nulls in each field
print('Male null %:\n',df_male.isnull().mean())
print('Female null %:\n',df_female.isnull().mean())
```
The above shows us what null values there are. A large proportion of the data would be unusable if we were to just drop all NA values. However, it wouldn't make sense to impute these values either, as it would result in lost information...
<br>
<br> This could be as high as 34.8% (ConvertedSalary, female), but this would need to be dropped anyway as it is what we are fitting to. However, over 20% of the CompanySize data is null, and we can salvage some of the information from the rows with a null CompanySize.
```
#Dropping rows with a null ConvertedSalary.
#ConvertedSalary is the column we are fitting against, so it cannot be null, which is why we're dropping it.
df_male = df_male.dropna(subset=['ConvertedSalary'], axis=0)
df_female = df_female.dropna(subset=['ConvertedSalary'], axis=0)
#Converting categorical datatypes to float so we can allocate null values
df_male['CareerSatisfaction'] = df_male['CareerSatisfaction'].astype(str).astype(float)
df_male['JobSatisfaction'] = df_male['JobSatisfaction'].astype(str).astype(float)
df_male['CompanySize'] = df_male['CompanySize'].astype(str).astype(float)
df_male['Age'] = df_male['Age'].astype(str).astype(float)
df_female['CareerSatisfaction'] = df_female['CareerSatisfaction'].astype(str).astype(float)
df_female['JobSatisfaction'] = df_female['JobSatisfaction'].astype(str).astype(float)
df_female['CompanySize'] = df_female['CompanySize'].astype(str).astype(float)
df_female['Age'] = df_female['Age'].astype(str).astype(float)
#It wouldn't make sense to impute any of the values in these columns as it would confuse the correlations between variables.
#Using decision trees (random forest), putting in a negative value should distinguish these null values separately
df_male = df_male.fillna(-1)
df_female = df_female.fillna(-1)
#Highlighting the nulls in each field
print('Male null %:\n',df_male.isnull().mean())
print('Female null %:\n',df_female.isnull().mean())
```
As you can now see, there are no more null values, so we can now fit a model to make predictions.
Since we want to predict salary, we will need a regressor, as the data is continuous.
```
#Splitting the data into features and the variable we want to predict
X_male = df_male.dropna().drop(['ConvertedSalary'],axis=1)
y_male = df_male.dropna()['ConvertedSalary']
X_female = df_female.dropna().drop(['ConvertedSalary'],axis=1)
y_female = df_female.dropna()['ConvertedSalary']
#Splitting data into train and test data
X_train_m, X_test_m, y_train_m, y_test_m = train_test_split(X_male,y_male,test_size=0.2,random_state=42)
X_train_f, X_test_f, y_train_f, y_test_f = train_test_split(X_female,y_female,test_size=0.2,random_state=42)
#Training the random Forest model on the training set
clf_m = RandomForestRegressor(n_estimators=100)
clf_f = RandomForestRegressor(n_estimators=100)
clf_m.fit(X_train_m,y_train_m)
clf_f.fit(X_train_f,y_train_f)
#Making predictions off the model
y_pred_test_m=clf_m.predict(X_test_m)
y_pred_train_m=clf_m.predict(X_train_m)
y_pred_test_f=clf_f.predict(X_test_f)
y_pred_train_f=clf_f.predict(X_train_f)
#Evaluating the models performance
print("Male Test score: ",r2_score(y_test_m, y_pred_test_m))
print("Male Train score: ",r2_score(y_train_m, y_pred_train_m))
print("Female Test score: ",r2_score(y_test_f, y_pred_test_f))
print("Female Train score: ",r2_score(y_train_f, y_pred_train_f))
```
The above is a prime example of overfitting: the model learns correlations specific to the training dataset which improve its predictions there, but which do not generalise well to new data. This is why the test score is so low while the training score is much higher.
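One way this overfitting could plausibly be reduced (a sketch I'm adding here, not something attempted in the original analysis) is to constrain the trees and tune those constraints with cross-validation. The snippet below assumes the `X_train_f`/`y_train_f` split created above and standard scikit-learn APIs.
```
# Minimal sketch (assumption): constrain tree depth/leaf size and pick values by cross-validation
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    'max_depth': [5, 10, 20],        # shallower trees memorise less of the training data
    'min_samples_leaf': [1, 5, 20],  # larger leaves smooth out noisy salary values
}
search = GridSearchCV(
    RandomForestRegressor(n_estimators=100, random_state=42),
    param_grid,
    scoring='r2',
    cv=5,
)
search.fit(X_train_f, y_train_f)
print(search.best_params_, search.best_score_)
```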
```
print("These are the male feature importances, ordered by importance:")
print(sorted(list(zip(X_male.columns, clf_m.feature_importances_)), key=lambda x: x[1], reverse=True))
print("These are the female feature importances, ordered by importance:")
sorted(list(zip(X_female.columns, clf_f.feature_importances_)), key=lambda x: x[1], reverse=True)
```
[Look Up](https://www.luogu.org/problemnew/show/P2947). Given an array, for each number find the position of the first number to its right that is larger than it; if there is none, the answer is 0.
Approach: a classic application of a monotonic stack. Scan the array from right to left while maintaining a stack that always holds (indices of) elements larger than the current one; pop everything that is not larger.
```
def LookUp(nums):
    n = len(nums)
    res = [0]*n
    s = list()
    for idx in range(n-1, -1, -1):
        # First pop every element that is not larger than the current number
        while s and nums[s[-1]] <= nums[idx]:
            s.pop()
        # Whatever remains on the stack is larger than the current number
        if not s:  # stack is empty: no larger number to the right
            res[idx] = 0
        else:  # otherwise the nearest larger number is the index on top of the stack
            res[idx] = s[-1]
        s.append(idx)
    return res
```
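A quick usage check with a made-up input (my own example, not from the original problem statement): in `[3, 1, 4, 2]`, both 3 and 1 have their nearest larger number at index 2, while 4 and 2 have none.
```
# Hypothetical usage of the LookUp function defined above
print(LookUp([3, 1, 4, 2]))  # expected: [2, 2, 0, 0] -- indices of the nearest larger number to the right
```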
[Remove K Digits](https://leetcode.com/problems/remove-k-digits/). Given a string of digits, remove $k$ of them so that the resulting number is as small as possible.
Approach: monotonic stack. Push the digits onto a stack in order; clearly, the smaller the high-order digits the better, so when a digit is smaller than the stack top, pop the stack. Note that the number of pops must not exceed $k$. Not hard, but the implementation logic is slightly fiddly.
```
def removeKdigits(num: str, k: int) -> str:
    n = len(num)
    if k >= n:
        return '0'
    s = list()
    cnt = 0
    for ch in num:
        if cnt < k:  # still allowed to delete digits (at most k deletions)
            if not s:
                s.append(ch)
                continue
            while s and int(ch) < int(s[-1]) and cnt < k:
                s.pop()
                cnt += 1
            s.append(ch)
        else:
            s.append(ch)
    return str(int(''.join(s[:n-k])))
```
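A couple of quick checks with made-up inputs (the first matches the well-known LeetCode example):
```
# Hypothetical usage of removeKdigits
print(removeKdigits("1432219", 3))  # expected "1219"
print(removeKdigits("10200", 1))    # expected "200" -- leading zeros are stripped by int()
```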
A written-test question from Yuanfudao's 2019 recruitment. Given $C$ strings in a compressed representation, decode and restore each of them.
Approach: the input string may or may not contain brackets, and both cases need to be handled.
```
import sys

def func(C):
    def reverse(s):
        s = list(s)
        i, j = 0, len(s) - 1
        while i < j:
            s[i], s[j] = s[j], s[i]
            i += 1
            j -= 1
        return ''.join(s)

    for _ in range(C):
        s = sys.stdin.readline().strip()
        n = len(s)
        stack = list()
        num = ''
        idx = 0
        while idx < n:
            # Push letters and brackets onto the stack
            while idx < n and not s[idx].isdigit():
                stack.append(s[idx])
                idx += 1
            # Extract the repeat count
            while idx < n and s[idx].isdigit():
                num += s[idx]
                idx += 1
            num = int(num) if num != '' else 1
            sub_s = ''  # the compressed substring
            if stack[-1] == ')':  # brackets need special handling
                left_cnt = 0
                while stack and stack[-1] != '(':
                    while stack[-1] == ')':
                        left_cnt += 1
                        stack.pop()
                    while stack and stack[-1] != '(':
                        sub_s += stack.pop()
                for _ in range(left_cnt):  # pop the matching number of left brackets
                    stack.pop()
            else:  # no brackets: just take a single character
                sub_s = stack.pop()
            sub_s = reverse(sub_s) * num
            num = ''
            for ch in sub_s:
                stack.append(ch)
        res = ''
        while stack[-1] == ')':
            stack.pop()
        while stack and stack[-1] != '(':
            if stack[-1] == ')':
                stack.pop()
                continue
            res += stack.pop()
        print(reverse(res))
```
[Decode String](https://leetcode.com/problems/decode-string/). Given an encoded, compressed string in which characters are wrapped in square brackets and a preceding number gives the repeat count for the bracketed characters, restore the full string according to this rule.
Approach: a pure implementation problem that is a bit fiddly to write. When a right bracket is encountered, pop the stack until the matching left bracket; that part is the compressed substring. Then pop all the digits on the stack to get the repeat count.
```
def decodeString(s: str) -> str:
    stack = list()
    for ch in s:
        if ch != ']':
            stack.append(ch)
        else:
            # 1. Pop the encoded substring
            enc_s = str()
            while stack[-1] != '[':
                enc_s = stack.pop()+enc_s
            _ = stack.pop()
            # 2. Pop the repeat count
            num = str()
            while stack and stack[-1].isdigit():
                num = stack.pop()+num
            num = int(num)
            # 3. Expand and push back
            dec_s = enc_s*num
            for c in dec_s:
                stack.append(c)
    return ''.join(stack)
decodeString("3[a]2[bc]")
```
[Min Stack](https://leetcode.com/problems/min-stack/). Design a stack whose ```push()```, ```top()```, ```pop()``` and ```min()``` operations all run in $O(1)$ time.
Approach: keep an auxiliary stack that only stores the current $min$ values. When popping, the auxiliary stack is popped only if its top equals the top of the main stack.
```
class MinStack:
    def __init__(self):
        """
        initialize your data structure here.
        """
        self.s = list()
        self.s_min = list()

    def push(self, x: int) -> None:
        self.s.append(x)
        if not self.s_min or x <= self.s_min[-1]:
            self.s_min.append(x)

    def pop(self) -> None:
        if self.s_min[-1] == self.s[-1]:
            self.s_min.pop()
        self.s.pop()

    def top(self) -> int:
        return self.s[-1]

    def getMin(self) -> int:
        return self.s_min[-1]
```
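A small usage example (my own, not from the original problem statement):
```
# Hypothetical usage of MinStack
ms = MinStack()
ms.push(-2)
ms.push(0)
ms.push(-3)
print(ms.getMin())  # -3
ms.pop()
print(ms.top())     # 0
print(ms.getMin())  # -2
```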
[Dota2 Senate](https://leetcode.com/problems/dota2-senate/). Given a string containing only 'R' and 'D' that represents the acting order of two factions, where each member's action eliminates one member of the other faction, determine which faction wins.
Approach: use two queues, one per faction, and tag every member with an index; the smaller the index, the earlier that member acts. Repeatedly take the front member of each queue and let them face off; only the winner re-enters its queue with an updated (increased) index.
```
def predictPartyVictory(senate: str) -> str:
    q_D, q_R = list(), list()
    n = len(senate)
    for idx, ch in enumerate(senate):
        if ch == 'R':
            q_R.append(idx)
        else:
            q_D.append(idx)
    while q_D and q_R:
        idx_D, idx_R = q_D.pop(0), q_R.pop(0)
        if idx_D < idx_R:
            q_D.append(idx_D+n)
        else:
            q_R.append(idx_R+n)
    return "Radiant" if q_R else 'Dire'
```
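Two quick checks with made-up inputs:
```
# Hypothetical usage of predictPartyVictory
print(predictPartyVictory("RD"))   # "Radiant" -- R acts first and eliminates the only D
print(predictPartyVictory("RDD"))  # "Dire" -- R eliminates the first D, but the second D eliminates R
```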
# Sketch Classifier for "How Do Humans Sketch Objects?"
A sketch classifier using the dataset from the paper <a href='http://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/'>How Do Humans Sketch Objects?</a> where the authors collected 20,000 unique sketches evenly distributed over 250 object categories - we will use a CNN (using Keras) to classify a sketch.
<img src='http://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/teaser_siggraph.jpg'/>
```
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import random
from scipy.misc import imresize
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use('ggplot')
SKETCH_DIR = '/Volumes/Storage/sketches (subset)/png/'
DEST_SKETCH_DIR = '/Volumes/Storage/sketches (subset)/sketches_training_data/'
TARGET_SIZE = (128,128)
```
## Create subset data
To reduce the size of the data (and demands of training), we will use a subset of the data.
```
def get_image_file_paths_and_categories():
"""
Walk the root directory and for each subdirectory, obtain the
list of .png image files creating (and returning) a list for each category label and
associated filepath
"""
image_file_paths = []
categories = []
for d in os.listdir(SKETCH_DIR):
label = d
if not os.path.isdir(os.path.join(SKETCH_DIR, d)):
continue
for f in os.listdir(os.path.join(SKETCH_DIR, d)):
full_path = os.path.join(os.path.join(SKETCH_DIR, d), f)
if os.path.isfile(full_path) and ".png" in full_path.lower():
categories.append(label)
image_file_paths.append(full_path)
return image_file_paths, categories
image_file_paths, categories = get_image_file_paths_and_categories()
set(categories)
TARGET_COUNT = 150
selected_categories = []
available_categories = list(set(categories))
while len(selected_categories) < TARGET_COUNT:
idx = random.randint(0, len(available_categories)-1)
category = available_categories[idx]
selected_categories.append(category)
del available_categories[idx]
selected_categories
print("Filtered categories count {}".format(len(selected_categories)))
def split_training_validation_data(shuffle=True, split=0.8, target_size=TARGET_SIZE, selected_categories=None):
"""
Split the data into training and validation (as well as resizing the images)
Copies are made from the main file path and stored in a destination folder.
"""
image_scale = None
training_samples_count = 0
validation_samples_count = 0
for d in os.listdir(SKETCH_DIR):
label = d
if not os.path.isdir(os.path.join(SKETCH_DIR, d)) or d not in selected_categories:
continue
file_names = []
file_data = []
for f in os.listdir(os.path.join(SKETCH_DIR, d)):
full_path = os.path.join(os.path.join(SKETCH_DIR, d), f)
if os.path.isfile(full_path) and ".png" in full_path.lower():
file_names.append(f)
if image_scale is None:
image_scale = float(target_size[0]) / float(plt.imread(full_path).shape[0])
file_data.append(imresize(plt.imread(full_path), image_scale))
# shuffle
indexes = np.arange(len(file_names))
if shuffle:
np.random.shuffle(indexes)
training_end_index = int(len(indexes) * split)
training_indexes = indexes[:training_end_index]
validation_indexes = indexes[training_end_index:]
training_dir = os.path.join(DEST_SKETCH_DIR, 'training')
validation_dir = os.path.join(DEST_SKETCH_DIR, 'validation')
class_training_dir = os.path.join(training_dir, label)
class_validation_dir = os.path.join(validation_dir, label)
if not os.path.exists(training_dir):
os.mkdir(training_dir)
if not os.path.exists(validation_dir):
os.mkdir(validation_dir)
if not os.path.exists(class_training_dir):
os.mkdir(class_training_dir)
if not os.path.exists(class_validation_dir):
os.mkdir(class_validation_dir)
for idx in training_indexes:
training_samples_count += 1
plt.imsave(
os.path.join(class_training_dir, file_names[idx]), file_data[idx],
format='png', cmap='gray')
for idx in validation_indexes:
validation_samples_count += 1
plt.imsave(
os.path.join(class_validation_dir, file_names[idx]), file_data[idx],
format='png', cmap='gray')
print("Finished - training samples = {}, validation samples {}".format(training_samples_count,
validation_samples_count))
return training_samples_count, validation_samples_count
training_samples_count, validation_samples_count = split_training_validation_data(
selected_categories=selected_categories)
print("training_samples_count {}, validation_samples_count {}".format(
training_samples_count, validation_samples_count))
```
## Data exploration
```
def get_training_validation_data():
training_labels = []
training_filenames = []
validation_labels = []
validation_filenames = []
training_dir = os.path.join(DEST_SKETCH_DIR, 'training')
validation_dir = os.path.join(DEST_SKETCH_DIR, 'validation')
# iterate through the training directory
for d in os.listdir(training_dir):
label = d
if not os.path.isdir(os.path.join(training_dir, d)):
continue
for f in os.listdir(os.path.join(training_dir, d)):
full_path = os.path.join(os.path.join(training_dir, d), f)
if os.path.isfile(full_path) and ".png" in full_path.lower():
training_labels.append(label)
training_filenames.append(full_path)
# iterate through the validation directory
for d in os.listdir(validation_dir):
label = d
if not os.path.isdir(os.path.join(validation_dir, d)):
continue
for f in os.listdir(os.path.join(validation_dir, d)):
full_path = os.path.join(os.path.join(validation_dir, d), f)
if os.path.isfile(full_path) and ".png" in full_path.lower():
validation_labels.append(label)
validation_filenames.append(full_path)
return training_labels, training_filenames, validation_labels, validation_filenames
training_labels, training_filenames, _, _ = get_training_validation_data()
plt.imread(training_filenames[100]).shape
f, axarr = plt.subplots(8, 2, figsize=(8,32))
image_scale = 1.0
for r in range(0, 8):
for c in range(0, 2):
index = random.randint(0, len(training_labels)-1)
axarr[r, c].imshow(imresize(plt.imread(training_filenames[index]), image_scale), cmap='gray', interpolation='nearest')
axarr[r, c].set_title(training_labels[index])
```
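The notebook stops at data exploration in this excerpt. As a rough, hypothetical illustration of the CNN mentioned in the introduction, the sketch below shows one minimal way the prepared `training`/`validation` folders could be fed into a small Keras model; the architecture, hyperparameters, and use of `ImageDataGenerator.flow_from_directory` are my assumptions, not the authors' actual model.
```
# Minimal sketch (assumption): a small Keras CNN fed from the prepared directories
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = len(selected_categories)  # 150 categories in the subset created above

train_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    os.path.join(DEST_SKETCH_DIR, 'training'),
    target_size=TARGET_SIZE, color_mode='grayscale',
    batch_size=32, class_mode='categorical')
valid_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    os.path.join(DEST_SKETCH_DIR, 'validation'),
    target_size=TARGET_SIZE, color_mode='grayscale',
    batch_size=32, class_mode='categorical')

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(TARGET_SIZE[0], TARGET_SIZE[1], 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_gen, epochs=10, validation_data=valid_gen)
```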