# Monte Carlo Simulations with Python (Part 1)
[Patrick Hanbury](https://towardsdatascience.com/monte-carlo-simulations-with-python-part-1-f5627b7d60b0)
- Notebook author: Israel Oliveira [\[e-mail\]](mailto:'Israel%20Oliveira%20'<[email protected]>)
```
%load_ext watermark
import numpy as np
import math
import random
from matplotlib import pyplot as plt
from IPython.display import clear_output, display, Markdown, Latex, Math
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
import scipy.integrate as integrate
from decimal import Decimal
import pandas as pd
PI = math.pi
e = math.e
# Run this cell before closing.
%watermark
%watermark --iversion
%watermark -b -r -g
```
We want:
$I_{ab} = \int\limits_{a}^{b} f(x) dx ~~~(1)$
We can obtain this from the average value of $f$ over $[a, b]$:
$\hat{f}_{ab} = \frac{1}{b-a} \int\limits_{a}^{b} f(x) dx ~~~(2)$
```
def func(x):
return (x - 3) * (x - 5) * (x - 7) + 85
a, b = 2, 9 # integral limits
x = np.linspace(0, 10)
y = func(x)
fig, ax = plt.subplots()
ax.plot(x, y, 'r', linewidth=2)
ax.set_ylim(bottom=0)
# Make the shaded region
ix = np.linspace(a, b)
iy = func(ix)
verts = [(a, 0), *zip(ix, iy), (b, 0)]
poly = Polygon(verts, facecolor='0.9', edgecolor='0.5')
ax.add_patch(poly)
ax.text(0.5 * (a + b), 30, r"$\int_a^b f(x)\mathrm{d}x$",
horizontalalignment='center', fontsize=20)
fig.text(0.9, 0.05, '$x$')
fig.text(0.1, 0.9, '$y$')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks((a, b))
ax.set_xticklabels(('$a$', '$b$'))
ax.set_yticks([])
plt.axhline(90, xmin=0.225, xmax=0.87)
ax.text(0.5 * (a + b), 30, r"$\int_a^b f(x)\mathrm{d}x$",
horizontalalignment='center', fontsize=20)
plt.show()
```
With $(1)$ and $(2)$:
$\hat{f}_{ab} = \frac{1}{b-a} I $
$I = (b-a)\hat{f}_{ab} ~~~(3)$
By sampling $f(\cdot)$, it is possible to calculate an approximate value for $\hat{f}_{ab}$ (with a random variable $\mathbf{x}$):
$\mathbf{F}_{ab} = \{f(\mathbf{x}) ~|~ \mathbf{x} ~\in~ [a, b]\}$
The expectation for $\mathbf{F}_{ab}$ is:
$E[\mathbf{F}_{ab}] = \hat{f}_{ab}$
and concluding with
$I = E[\mathbf{F}_{ab}](b-a)$
So, how can we calculate $E[\mathbf{F}_{ab}]$? With $N$ uniform samples of $\mathbf{x} \in [a, b]$. If $N$ is large enough and $\mathbf{x}$ is uniformly distributed over $[a, b]$:
$ E[\mathbf{F}_{ab}] \approx \frac{1}{N} \sum\limits_{i=1}^N f(\mathbf{x}_i), ~~ \mathbf{x}_i ~\in~ [a, b]$
and
$I = E[\mathbf{F}_{ab}](b-a) = \lim\limits_{N \rightarrow \infty} \frac{b-a}{N} \sum\limits_{i=1}^N f(\mathbf{x}_i), ~~ \mathbf{x}_i ~\in~ [a, b] ~~~(4)$
This is the *Crude Monte Carlo*.
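For intuition, here is a minimal vectorized sketch of equation $(4)$ in NumPy; the function name `crude_mc` is purely illustrative, and the notebook's own `crude_monte_carlo` implementation follows in Example 1.
```
import numpy as np

def crude_mc(f, a, b, n=100_000):
    """Crude Monte Carlo estimate of the integral of f over [a, b], as in equation (4)."""
    x = np.random.uniform(a, b, size=n)   # N uniform samples in [a, b]
    return (b - a) * f(x).mean()          # (b - a) times the average of f

# Sanity check on a known integral: x**2 over [0, 1] is exactly 1/3.
print(crude_mc(lambda x: x**2, 0.0, 1.0))
```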
#### Example 1:
Calculate:
$I = \int\limits_{0}^{+\infty} \frac{e^{-x}}{(x-1)^2 + 1} dx ~~~(5)$
```
def get_rand_number(min_value, max_value):
"""
This function gets a random number from a uniform distribution between
the two input values [min_value, max_value] inclusively
Args:
- min_value (float)
- max_value (float)
Return:
- Random number between this range (float)
"""
range = max_value - min_value
choice = random.uniform(0,1)
return min_value + range*choice
def f_of_x(x):
"""
This is the main function we want to integrate over.
Args:
- x (float) : input to function; must be in radians
Return:
- output of function f(x) (float)
"""
return (e**(-1*x))/(1+(x-1)**2)
lower_bound = 0
upper_bound = 5
def crude_monte_carlo(num_samples=10000, lower_bound = 0, upper_bound = 5):
"""
This function performs the Crude Monte Carlo for our
specific function f(x) over [lower_bound, upper_bound] (default x=0 to x=5).
Notice that an upper bound of 5 is sufficient because f(x)
is already very close to 0 there (f(5) is about 0.0004).
Args:
- num_samples (float) : number of samples
Return:
- Crude Monte Carlo estimation (float)
"""
sum_of_samples = 0
for i in range(num_samples):
x = get_rand_number(lower_bound, upper_bound)
sum_of_samples += f_of_x(x)
return (upper_bound - lower_bound) * float(sum_of_samples/num_samples)
display(Math(r'I \approx {:.4f}, ~N = 10^4'.format(crude_monte_carlo())))
display(Math(r'\left . f(a) \right |_{a=0} \approx '+r'{:.4f} '.format(f_of_x(lower_bound))))
display(Math(r'\left . f(b) \right |_{b=5} \approx '+r'{:.4f} '.format(f_of_x(upper_bound))))
```
Why $b=5$? Because
$ \lim\limits_{x \rightarrow +\infty} \frac{e^{-x}}{(x-1)^2 + 1} = 0 $
and $f(5) \approx 0.0004$, which we can treat as effectively $0$.
What if $b = 10$?
```
upper_bound = 10
display(Math(r'\left . f(b) \right |_{b=10} \approx '+r'{:.6f}'.format(f_of_x(upper_bound))+r'\approx 10^{-6}'))
display(Math(r'I \approx {:.4f}, ~N = 10^5 '.format(crude_monte_carlo(num_samples=100000, upper_bound = 10))))
plt.figure()
def func(x):
return f_of_x(x)
a, b = 0, 5 # integral limits
x = np.linspace(0, 6)
y = func(x)
fig, ax = plt.subplots()
ax.plot(x, y, 'r', linewidth=2)
ax.set_ylim(bottom=0)
# Make the shaded region
ix = np.linspace(a, b)
iy = func(ix)
verts = [(a, 0), *zip(ix, iy), (b, 0)]
poly = Polygon(verts, facecolor='0.9', edgecolor='0.5')
ax.add_patch(poly)
ax.text(0.2 * (a + b), 0.05, r"$\int_a^b f(x)\mathrm{d}x$",
horizontalalignment='center', fontsize=20)
fig.text(0.9, 0.05, '$x$')
fig.text(0.1, 0.9, '$y$')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks((a, b))
ax.set_xticklabels(('$a$', '$b$'))
ax.set_yticks([])
ax.axhline(0.2*crude_monte_carlo(),color='b', xmin=0.051, xmax=0.81)
iy = iy*0+0.2*crude_monte_carlo()
verts = [(a, 0), *zip(ix, iy), (b, 0)]
poly = Polygon(verts, facecolor='0.94', edgecolor='0.99')
ax.add_patch(poly)
ax.text(5,0.2*crude_monte_carlo()+0.03 , r"$\hat{f}_{ab}$",
horizontalalignment='center', fontsize=20)
plt.show()
```
Comparing with [Integration (scipy.integrate)](https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html).
```
results = integrate.quad(lambda x: f_of_x(x), lower_bound, upper_bound)
tmp = (Decimal(results[1]).as_tuple().digits[0], Decimal(results[1]).as_tuple().exponent + len(Decimal(results[1]).as_tuple().digits) -1)
display(Math(r'I_{\text{SciPy}} = '+r'{:.4f}, ~e \approx {}'.format(results[0],tmp[0])+r'\cdot 10^{'+'{}'.format(tmp[1])+r'}'))
diff = []
for _ in range(100):
diff.append(crude_monte_carlo(num_samples=100000, upper_bound = 10)-results[0])
df = pd.DataFrame([abs(x) for x in diff], columns=['$I- I_{\text{SciPy}}$'])
display(df.describe())
df.plot(grid = True)
df = pd.DataFrame([abs(x)/results[0] for x in diff], columns=['$(I- I_{\text{SciPy}})/I_{\text{SciPy}}$'])
display(df.describe())
df.plot(grid = True)
```
Let's confirm the estimated error using the variance of the Crude Monte Carlo estimator.
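The cell below (`get_crude_MC_variance`) computes the variance as
$\sigma^2 \approx (b-a)\,\langle f^2 \rangle - \left[(b-a)\,\langle f \rangle\right]^2$
where $\langle\cdot\rangle$ denotes the sample average over the $N$ draws, and the error of the estimate is then taken as
$\epsilon \approx \sqrt{\sigma^2 / N}$
which is what `math.sqrt(s1 / 10000)` evaluates at the end of the cell.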
```
def get_crude_MC_variance(num_samples = 10000, upper_bound = 5):
"""
This function returns the variance of the Crude Monte Carlo estimator.
Note that the input number of samples does not necessarily
need to correspond to the number of samples used in the Monte
Carlo simulation.
Args:
- num_samples (int)
Return:
- Variance for Crude Monte Carlo approximation of f(x) (float)
"""
int_max = upper_bound # this is the max of our integration range
# get the average of squares
running_total = 0
for i in range(num_samples):
x = get_rand_number(0, int_max)
running_total += f_of_x(x)**2
sum_of_sqs = running_total*int_max / num_samples
# get square of average
running_total = 0
for i in range(num_samples):
x = get_rand_number(0, int_max)
running_total += f_of_x(x)
sq_ave = (int_max*running_total/num_samples)**2
return sum_of_sqs - sq_ave
s1 = get_crude_MC_variance()
"{:.4f}".format(s1)
s2 = get_crude_MC_variance(100000,10)
"{:.4f}".format(s2)
math.sqrt(s1 / 10000)
df.describe().loc['mean'].to_list()[0]
```
### Importance Sampling
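The idea behind importance sampling is to draw samples from a weight function $g(x)$ whose shape roughly follows $f(x)$, and to average the ratio $f/g$ instead of $f$ itself:
$I = \int\limits_{a}^{b} f(x) dx = \int\limits_{a}^{b} \frac{f(x)}{g(x)} g(x) dx \approx \frac{1}{N} \sum\limits_{i=1}^N \frac{f(\mathbf{x}_i)}{g(\mathbf{x}_i)}, ~~~ \mathbf{x}_i \sim g$
The code below uses the template $g(x) = A e^{-\lambda x}$ (with $A = \lambda$, so that $g$ integrates to one on $[0, +\infty)$), samples from it via the inverse-CDF trick implemented in `inverse_G_of_r` ($-\ln(r)/\lambda$ for uniform $r$), and then scans over $\lambda$ for the value that minimizes the variance of the estimator.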
```
# this is the template of our weight function g(x)
def g_of_x(x, A, lamda):
e = 2.71828
return A*math.pow(e, -1*lamda*x)
def inverse_G_of_r(r, lamda):
return (-1 * math.log(float(r)))/lamda
def get_IS_variance(lamda, num_samples):
"""
This function calculates the variance of a Monte Carlo estimate
that uses importance sampling.
Args:
- lamda (float) : lambda value of g(x) being tested
Return:
- Variance
"""
A = lamda
int_max = 5
# get sum of squares
running_total = 0
for i in range(num_samples):
x = get_rand_number(0, int_max)
running_total += (f_of_x(x)/g_of_x(x, A, lamda))**2
sum_of_sqs = running_total / num_samples
# get squared average
running_total = 0
for i in range(num_samples):
x = get_rand_number(0, int_max)
running_total += f_of_x(x)/g_of_x(x, A, lamda)
sq_ave = (running_total/num_samples)**2
return sum_of_sqs - sq_ave
# get variance as a function of lambda by testing many
# different lambdas
test_lamdas = [i*0.05 for i in range(1, 61)]
variances = []
for i, lamda in enumerate(test_lamdas):
print(f"lambda {i+1}/{len(test_lamdas)}: {lamda}")
A = lamda
variances.append(get_IS_variance(lamda, 10000))
clear_output(wait=True)
optimal_lamda = test_lamdas[np.argmin(np.asarray(variances))]
IS_variance = variances[np.argmin(np.asarray(variances))]
print(f"Optimal Lambda: {optimal_lamda}")
print(f"Optimal Variance: {IS_variance}")
print(f"Error: {(IS_variance/10000)**0.5}")
def importance_sampling_MC(lamda, num_samples):
A = lamda
running_total = 0
for i in range(num_samples):
r = get_rand_number(0,1)
running_total += f_of_x(inverse_G_of_r(r, lamda=lamda))/g_of_x(inverse_G_of_r(r, lamda=lamda), A, lamda)
approximation = float(running_total/num_samples)
return approximation
# run simulation
num_samples = 10000
approx = importance_sampling_MC(optimal_lamda, num_samples)
variance = get_IS_variance(optimal_lamda, num_samples)
error = (variance/num_samples)**0.5
# display results
print(f"Importance Sampling Approximation: {approx}")
print(f"Variance: {variance}")
print(f"Error: {error}")
display(Math(r'(I_{IS} - I_{\text{SciPy}})/I_{\text{SciPy}} = '+'{:.4}\%'.format(100*abs((approx-results[0])/results[0]))))
```
```
import sys; sys.path.append('../rrr')
from multilayer_perceptron import *
from figure_grid import *
from local_linear_explanation import *
from toy_colors import generate_dataset, imgshape, ignore_rule1, ignore_rule2, rule1_score, rule2_score
import lime
import lime.lime_tabular
```
# Toy Color Dataset
This is a simple, two-class image classification dataset with two independent ways a model could learn to distinguish between classes. The first is whether all four corner pixels are the same color, and the second is whether the top-middle three pixels are all different colors. Images in class 1 satisfy both conditions and images in class 2 satisfy neither. See `color_dataset_generator` for more details.
We will train a multilayer perceptron to classify these images, explore which rule(s) it implicitly learns, and constrain it to use only one rule (or neither).
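To make the two rules concrete, here is a minimal illustrative sketch of what each rule checks on a single image reshaped to (5, 5, 3); the exact positions of the "top-middle three" pixels are an assumption here, and `color_dataset_generator` remains the authoritative definition.
```
import numpy as np

def corners_same(img):
    """Rule 1: all four corner pixels share the same color; img has shape (5, 5, 3)."""
    corners = [img[0, 0], img[0, -1], img[-1, 0], img[-1, -1]]
    return all(np.array_equal(corners[0], c) for c in corners[1:])

def top_middle_all_different(img):
    """Rule 2: the three top-middle pixels are pairwise different colors (assumed positions)."""
    p1, p2, p3 = img[0, 1], img[0, 2], img[0, 3]
    return (not np.array_equal(p1, p2)
            and not np.array_equal(p1, p3)
            and not np.array_equal(p2, p3))

# Class 1 images satisfy both checks; class 2 images satisfy neither.
```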
Let's first load our dataset:
```
X, Xt, y, yt = generate_dataset(cachefile='../data/toy-colors.npz')
E1 = np.array([ignore_rule2 for _ in range(len(y))])
E2 = np.array([ignore_rule1 for _ in range(len(y))])
print(X.shape, Xt.shape, y.shape, yt.shape, E1.shape, E2.shape)
```
## Understanding the Dataset
Let's just examine images from each class quickly and verify that in class 1, the corners are all the same color and the top-middle three pixels are all different (none of which should hold true in class 2):
```
plt.subplot(121)
plt.title('Class 1')
image_grid(X[np.argwhere(y == 0)[:9]], (5,5,3), 3)
plt.subplot(122)
plt.title('Class 2')
image_grid(X[np.argwhere(y == 1)[:9]], (5,5,3), 3)
plt.show()
```
Great.
## Explaining and learning diverse classifiers
Now let's see if we can train our model to implicitly learn each rule:
```
def explain(model, title='', length=4):
plt.title(title)
explanation_grid(model.grad_explain(Xt[:length*length]), imgshape, length)
# Train a model without any constraints
mlp_plain = MultilayerPerceptron()
mlp_plain.fit(X, y)
mlp_plain.score(Xt, yt)
# Train a model constrained to use the first rule
mlp_rule1 = MultilayerPerceptron(l2_grads=1000)
mlp_rule1.fit(X, y, E1)
mlp_rule1.score(Xt, yt)
# Train a model constrained to use the second rule
mlp_rule2 = MultilayerPerceptron(l2_grads=1000)
mlp_rule2.fit(X, y, E2)
mlp_rule2.score(Xt, yt)
# Visualize largest weights
with figure_grid(1,3, rowwidth=8) as g:
g.next()
explain(mlp_plain, 'No annotations')
g.next()
explain(mlp_rule1, '$A$ penalizing top middle')
g.next()
explain(mlp_rule2, '$A$ penalizing corners')
```
Notice that when we explicitly penalize the corners or the top middle, the model appears to learn the _other_ rule perfectly. We never told it which pixels it _should_ treat as significant, only which to avoid, yet the genuinely relevant pixels are exactly the ones that show up in the explanations, which suggests the explanations are an accurate reflection of the model's implicit logic.
When we don't have any annotations, the model does identify the top-middle pixels occasionally, suggesting it defaults to learning a heavily but not completely corner-weighted combination of the rules.
What happens when we forbid it from using either rule?
```
mlp_neither = MultilayerPerceptron(l2_grads=1e6)
mlp_neither.fit(X, y, E1 + E2)
mlp_neither.score(Xt, yt)
explain(mlp_neither, '$A$ biased against all relevant features')
plt.show()
```
As we might expect, accuracy goes down and we start identifying random pixels as significant.
## Find-another-explanation
Let's now pretend we have no knowledge of what $A$ _should_ be for this dataset. Can we still train models that use diverse rules just by examining explanations?
```
A1 = mlp_plain.largest_gradient_mask(X)
mlp_fae1 = MultilayerPerceptron(l2_grads=1000)
mlp_fae1.fit(X, y, A1)
mlp_fae1.score(Xt, yt)
explain(mlp_fae1, '$A$ biased against first model')
plt.show()
```
Excellent. When we train a model to have small gradients where the $A=0$ model has large ones, we reproduce the top-middle rule, though in some cases we learn a hybrid of the two. Now let's train another model to be different from either one. Note: I'm going to iteratively increase the L2 penalty until I get explanation divergence. I'm doing this manually for now, but it could easily be automated (a rough sketch of such a loop follows).
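For reference, a minimal sketch of what that automation might look like, reusing the `MultilayerPerceptron` and `largest_gradient_mask` API from above; the overlap measure, threshold, and penalty schedule here are illustrative assumptions, not the notebook's actual procedure.
```
def find_another_explanation(X, y, prior_models, l2_grads=1000, max_tries=5):
    """Train a model penalized away from the explanations of all prior models,
    increasing the explanation penalty until its gradient mask diverges."""
    # Combine the large-gradient masks of every model trained so far
    A = sum(m.largest_gradient_mask(X) for m in prior_models)
    for _ in range(max_tries):
        mlp = MultilayerPerceptron(l2_grads=l2_grads)
        mlp.fit(X, y, A)
        # Illustrative divergence test: little overlap with the prior models' masks
        overlap = (mlp.largest_gradient_mask(X) * (A > 0)).mean()
        if overlap < 0.05:  # assumed threshold
            return mlp
        l2_grads *= 10  # increase the penalty and try again
    return mlp
```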
```
A2 = mlp_fae1.largest_gradient_mask(X)
mlp_fae2 = MultilayerPerceptron(l2_grads=1e6)
mlp_fae2.fit(X, y, A1 + A2)
mlp_fae2.score(Xt, yt)
explain(mlp_fae2, '$A$ biased against models 1 and 2')
plt.show()
```
When we run this twice, we get low accuracy and random gradient placement. Let's visualize this all together:
```
gridsize = (2,3)
plt.subplot2grid(gridsize, (0,0))
explain(mlp_plain, r'$M_{0.67}\left[ f_X|\theta_0 \right]$', 4)
plt.subplot2grid(gridsize, (0,1))
explain(mlp_fae1, r'$M_{0.67}\left[ f_X|\theta_1 \right]$', 4)
plt.subplot2grid(gridsize, (0,2))
explain(mlp_fae2, r'$M_{0.67}\left[ f_X|\theta_2 \right]$', 4)
plt.subplot2grid(gridsize, (1,0), colspan=3)
plt.axhline(1, color='red', ls='--')
test_scores = [mlp_plain.score(Xt, yt), mlp_fae1.score(Xt, yt), mlp_fae2.score(Xt, yt)]
train_scores = [mlp_plain.score(X, y), mlp_fae1.score(X, y), mlp_fae2.score(X, y)]
plt.plot([0,1,2], train_scores, marker='^', label='Train', alpha=0.5, color='blue', markersize=10)
plt.plot([0,1,2], test_scores, marker='o', label='Test', color='blue')
plt.xlim(-0.5, 2.5)
plt.ylim(0.4, 1.05)
plt.ylabel(' Accuracy')
plt.xlabel('Find-another-explanation iteration')
plt.legend(loc='best', fontsize=10)
plt.xticks([0,1,2])
plt.show()
```
So this more or less demonstrates the find-another-explanation method on the toy color dataset.
## Transitions between rules
Separately, I ran a script to train many MLPs on this dataset, all biased against using corners, but with varying numbers of annotations in $A$ and varying L2 penalties. Let's see if we can find any transition behavior between these two rules:
```
import pickle
n_vals = pickle.load(open('../data/color_n_vals.pkl', 'rb'))
n_mlps = pickle.load(open('../data/color_n_mlps.pkl', 'rb'))
l2_vals = pickle.load(open('../data/color_l2_vals.pkl', 'rb'))
l2_mlps = pickle.load(open('../data/color_l2_mlps.pkl', 'rb'))
def realize(mlp_params):
return [MultilayerPerceptron.from_params(p) for p in mlp_params]
l2_rule1_scores = [rule1_score(mlp, Xt[:1000]) for mlp in realize(l2_mlps)]
l2_rule2_scores = [rule2_score(mlp, Xt[:1000]) for mlp in realize(l2_mlps)]
l2_acc_scores = [mlp.score(Xt[:1000], yt[:1000]) for mlp in realize(l2_mlps)]
n_rule1_scores = [rule1_score(mlp, Xt[:1000]) for mlp in realize(n_mlps)]
n_rule2_scores = [rule2_score(mlp, Xt[:1000]) for mlp in realize(n_mlps)]
n_acc_scores = [mlp.score(Xt[:1000], yt[:1000]) for mlp in realize(n_mlps)]
plt.figure(figsize=(8,4))
plt.subplot(121)
plt.plot(l2_vals, l2_rule1_scores, 'o', label='Corners', marker='^')
plt.plot(l2_vals, l2_rule2_scores, 'o', label='Top mid.')
plt.plot(l2_vals, l2_acc_scores, label='Accuracy')
plt.title('Effect of $\lambda_1$ on implicit rule (full $A$)')
plt.ylabel(r'Mean % $M_{0.67}\left[f_X\right]$ in corners / top middle')
plt.ylim(0,1.1)
plt.xscale("log")
plt.yticks([])
plt.xlim(0,1000)
plt.legend(loc='best', fontsize=10)
plt.xlabel(r'$\lambda_1$ (explanation L2 penalty)')
plt.subplot(122)
plt.plot(n_vals, n_rule1_scores, 'o', label='Corners', marker='^')
plt.plot(n_vals, n_rule2_scores, 'o', label='Top mid.')
plt.plot(n_vals, n_acc_scores, label='Accuracy')
plt.xscale('log')
plt.ylim(0,1.1)
plt.xlim(0,10000)
plt.legend(loc='best', fontsize=10)
plt.title('Effect of $A$ on implicit rule ($\lambda_1=1000$)')
plt.xlabel('Number of annotations (nonzero rows of $A$)')
plt.tight_layout()
plt.show()
```
Cool. So we can definitely see a clear transition effect between rules.
## Comparison with LIME
Although we have some pretty clear evidence that gradient explanations are descriptive for our MLP on this simple dataset, let's make sure LIME produces similar results. We'll also do a very basic benchmark to see how long each of the respective methods takes.
```
explainer = lime.lime_tabular.LimeTabularExplainer(
Xt,
feature_names=list(range(len(Xt[0]))),
class_names=[0,1])
import time
t1 = time.perf_counter()  # time.clock() was removed in Python 3.8
lime_explanations = [
explainer.explain_instance(Xt[i], mlp_plain.predict_proba, top_labels=1)
for i in range(25)
]
t2 = time.perf_counter()
input_grads = mlp_plain.input_gradients(Xt[:25])
t3 = time.perf_counter()
print('LIME took {:.6f}s/example'.format((t2-t1)/25.))
print('grads took {:.6f}s/example, which is {:.0f}x faster'.format((t3-t2)/25., (t2-t1)/float(t3-t2)))
preds = mlp_plain.predict(Xt[:25])
lime_exps = [LocalLinearExplanation.from_lime(Xt[i], preds[i], lime_explanations[i]) for i in range(25)]
grad_exps = [LocalLinearExplanation(Xt[i], preds[i], input_grads[i]) for i in range(25)]
plt.subplot(121)
plt.title('LIME', fontsize=16)
explanation_grid(lime_exps, imgshape, 3)
plt.subplot(122)
plt.title(r'$M_{0.67}\left[f_X\right]$', fontsize=16)
explanation_grid(grad_exps, imgshape, 3)
plt.show()
```
So our explanation methods agree somewhat closely, which is good to see. Also, gradients are significantly faster.
## Learning from less data
Do explanations allow our model to learn with less data? Separately, we trained many models on increasing fractions of the dataset with different annotations; some penalizing the corners/top-middle and some penalizing everything but the corners/top-middle. Let's see how each version of the model performs:
```
import pickle
data_counts = pickle.load(open('../data/color_data_counts.pkl', 'rb'))
normals_by_count = pickle.load(open('../data/color_normals_by_count.pkl', 'rb'))
pro_r1s_by_count = pickle.load(open('../data/color_pro_r1s_by_count.pkl', 'rb'))
pro_r2s_by_count = pickle.load(open('../data/color_pro_r2s_by_count.pkl', 'rb'))
anti_r1s_by_count = pickle.load(open('../data/color_anti_r1s_by_count.pkl', 'rb'))
anti_r2s_by_count = pickle.load(open('../data/color_anti_r2s_by_count.pkl', 'rb'))
def score_all(ms):
return [m.score(Xt,yt) for m in realize(ms)]
def realize(mlp_params):
return [MultilayerPerceptron.from_params(p) for p in mlp_params]
sc_normal = score_all(normals_by_count)
sc_pro_r1 = score_all(pro_r1s_by_count)
sc_pro_r2 = score_all(pro_r2s_by_count)
sc_anti_r1 = score_all(anti_r1s_by_count)
sc_anti_r2 = score_all(anti_r2s_by_count)
from matplotlib import ticker
def plot_A(A):
plt.gca().set_xticks([])
plt.gca().set_yticks([])
plt.imshow((A[0].reshape(5,5,3) * 255).astype(np.uint8), interpolation='none')
for i in range(5):
for j in range(5):
if A[0].reshape(5,5,3)[i][j][0]:
plt.text(j,i+0.025,'1',ha='center',va='center',fontsize=8)
else:
plt.text(j,i+0.025,'0',ha='center',va='center',color='white',fontsize=8)
gridsize = (4,9)
plt.figure(figsize=(10,5))
cs=3
plt.subplot2grid(gridsize, (0,2*cs))
plt.title('Pro-Rule 1')
plot_A(~E2)
plt.subplot2grid(gridsize, (1,2*cs))
plt.title('Pro-Rule 2')
plot_A(~E1)
plt.subplot2grid(gridsize, (2,2*cs))
plt.title('Anti-Rule 1')
plot_A(E2)
plt.subplot2grid(gridsize, (3,2*cs))
plt.title('Anti-Rule 2')
plot_A(E1)
plt.subplot2grid(gridsize, (0,0), rowspan=4, colspan=cs)
plt.title('Learning Rule 1 with $A$')
plt.errorbar(data_counts, sc_normal, label=r'Normal', lw=2)
plt.errorbar(data_counts, sc_pro_r1, label=r'Pro-Rule 1', marker='H')
plt.errorbar(data_counts, sc_anti_r2, label=r'Anti-Rule 2', marker='^')
plt.xscale('log')
plt.ylim(0.5,1)
plt.ylabel('Test Accuracy')
plt.xlabel('# Training Examples')
plt.legend(loc='best', fontsize=10)
plt.gca().xaxis.set_major_formatter(ticker.FormatStrFormatter("%d"))
plt.gca().set_xticks([10,100,1000])
plt.subplot2grid(gridsize, (0,cs), rowspan=4, colspan=cs)
plt.title('Learning Rule 2 with $A$')
plt.gca().set_yticklabels([])
plt.errorbar(data_counts, sc_normal, label=r'Normal', lw=2)
plt.errorbar(data_counts, sc_pro_r2, label=r'Pro-Rule 2', marker='H')
plt.errorbar(data_counts, sc_anti_r1, label=r'Anti-Rule 1', marker='^')
plt.xscale('log')
plt.ylim(0.5,1)
plt.xlabel('# Training Examples')
plt.legend(loc='best', fontsize=10)
plt.gca().xaxis.set_major_formatter(ticker.FormatStrFormatter("%d"))
plt.show()
def improvement_over_normal(scores, cutoff):
norm = data_counts[next(i for i,val in enumerate(sc_normal) if val > cutoff)]
comp = data_counts[next(i for i,val in enumerate(scores) if val > cutoff)]
return norm / float(comp)
def print_improvement(name, scores, cutoff):
print('Extra data for normal model to reach {:.2f} accuracy vs. {}: {:.2f}'.format(
cutoff, name, improvement_over_normal(scores, cutoff)))
print_improvement('Anti-Rule 2', sc_anti_r2, 0.8)
print_improvement('Anti-Rule 2', sc_anti_r2, 0.9)
print_improvement('Anti-Rule 2', sc_anti_r2, 0.95)
print_improvement('Anti-Rule 2', sc_anti_r2, 0.99)
print('')
print_improvement('Pro-Rule 1', sc_pro_r1, 0.8)
print_improvement('Pro-Rule 1', sc_pro_r1, 0.9)
print_improvement('Pro-Rule 1', sc_pro_r1, 0.95)
print_improvement('Pro-Rule 1', sc_pro_r1, 0.99)
print('')
print_improvement('Pro-Rule 2', sc_pro_r2, 0.9)
print_improvement('Pro-Rule 2', sc_pro_r2, 0.95)
print_improvement('Pro-Rule 2', sc_pro_r2, 0.97)
print('')
print_improvement('Anti-Rule 1', sc_anti_r1, 0.7)
print_improvement('Anti-Rule 1', sc_anti_r1, 0.8)
print_improvement('Anti-Rule 1', sc_anti_r1, 0.9)
```
Generally, we learn better classifiers with less data using explanations (especially in the Pro-Rule 1 case, where we provide the most information). Biasing against the top-middle or against everything but the corners / top-middle tends to give us more accurate classifiers. Biasing against the corners, however, gives us _lower_ accuracy until we obtain more examples. This may be because it's an inherently harder rule to learn; there are only 4 ways that all corners can match, but $4*3*2=24$ ways the top-middle pixels can differ.
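As a quick sanity check of that counting argument (assuming a palette of 4 colors, as the counts above imply):
```
from itertools import product

colors = range(4)
# Ways the four corners can all match: one choice of shared color
corners_match = sum(1 for c in product(colors, repeat=4) if len(set(c)) == 1)
# Ways the three top-middle pixels can be pairwise different
top_mid_differ = sum(1 for c in product(colors, repeat=3) if len(set(c)) == 3)
print(corners_match, top_mid_differ)  # 4 24
```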
## Investigating cutoffs
We chose a 0.67 cutoff for most of our training a bit arbitrarily, so let's just investigate that briefly:
```
def M(input_gradients, cutoff=0.67):
return np.array([np.abs(e) > cutoff*np.abs(e).max() for e in input_gradients]).astype(int).ravel()
grads = mlp_plain.input_gradients(Xt)
grads2 = mlp_rule1.input_gradients(Xt)
cutoffs = np.linspace(0,1,100)
cutoff_pcts = np.array([M(grads, c).sum() / float(len(grads.ravel())) for c in cutoffs])
cutoff_pcts2 = np.array([M(grads2, c).sum() / float(len(grads2.ravel())) for c in cutoffs])
plt.plot(cutoffs, cutoff_pcts, label='$A=0$')
plt.plot(cutoffs, cutoff_pcts2, label='$A$ against corners')
plt.legend(loc='best')
plt.xlabel('Cutoff')
plt.ylabel('Mean fraction of qualifying gradient entries')
plt.yticks(np.linspace(0,1,21))
plt.yscale('log')
plt.axhline(0.06, ls='--', c='red')
plt.axvline(0.67, ls='--', c='blue')
plt.title('-- Toy color dataset --\n# qualifying entries falls exponentially\n0.67 cutoff takes the top ~6%')
plt.show()
```
On average, the number of elements we keep falls exponentially without any clear kink in the curve, so perhaps our arbitrariness is justified, though it's problematic that it exists in the first place.
```
import os
import re
import torch
import pickle
import pandas as pd
import numpy as np
from tqdm.auto import tqdm
tqdm.pandas()
```
# 1. Pre-processing
### Create a combined dataframe
> This creates a dataframe containing the image IDs & labels for both the original images provided by the Bristol Myers Squibb pharmaceutical company and the augmentations generated for each original image.
```
train_df = pd.read_csv('../../../../../../../../../Downloads/train/train_labels.csv')
```
### InChI pre-processing
> This first splits the chemical formula (the first layer of the InChI string) into a sequence of element symbols and numbers, and then splits the remaining layers into sequences of text and numbers.
```
def split_inchi_formula(formula: str) -> str:
"""
This function splits the chemical formula (in the first layer of InChI)
into its separate element and number components.
:param formula: chemical formula, e.g. C13H20OS
:type formula: string
:return: split chemical formula
:rtype: string
"""
string = ''
# for each chemical element in the formula
for i in re.findall(r"[A-Z][^A-Z]*", formula):
# return each separate element, i.e. text
elem = re.match(r"\D+", i).group()
# return each separate number
num = i.replace(elem, "")
# add either the element or both element and number (space-separated) to the string
if num == "":
string += f"{elem} "
else:
string += f"{elem} {str(num)} "
return string.rstrip(' ')
def split_inchi_layers(layers: str) -> str:
"""
This function splits the layers (following the first layer of InChI)
into separate element and number components.
:param layers: layer string, e.g. c1-9(2)8-15-13-6-5-10(3)7-12(13)11(4)14/h5-7,9,11,14H,8H2,1-4H3
:type layers: string
:return: split layer info
:rtype: string
"""
string = ''
# for each layer in layers
for i in re.findall(r"[a-z][^a-z]*", layers):
# get the character preceding the layer info
elem = i[0]
# get the number string succeeding the character
num = i.replace(elem, "").replace("/", "")
num_string = ''
# for each number string
for j in re.findall(r"[0-9]+[^0-9]*", num):
# get the list of numbers
num_list = list(re.findall(r'\d+', j))
# get the first number
_num = num_list[0]
# add the number string to the overall result
if j == _num:
num_string += f"{_num} "
else:
extra = j.replace(_num, "")
num_string += f"{_num} {' '.join(list(extra))} "
string += f"/{elem} {num_string}"
return string.rstrip(' ')
```
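As a quick illustration of what these helpers produce (a hypothetical standalone check, run with the functions above in scope):
```
# The formula layer is split into element / count tokens:
print(split_inchi_formula('C13H20OS'))   # -> 'C 13 H 20 O S'
# Subsequent layers keep their leading character and separate numbers from punctuation:
print(split_inchi_layers('c1-2'))        # -> '/c 1 - 2'
```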
### Tokenize texts and predict captions
> This tokenizes each text by converting it to a sequence of token indices. The reverse conversion (sequence back to text) is also supported, and image caption prediction from index sequences takes place within the Tokenizer class.
```
class Tokenizer(object):
def __init__(self):
# string to integer mapping
self.stoi = {}
# integer to string mapping
self.itos = {}
def __len__(self) -> int:
"""
This method returns the length of token:index map.
:return: length of map
:rtype: int
"""
# return the length of the map
return len(self.stoi)
def fit_on_texts(self, texts: list) -> None:
"""
This method creates a vocabulary of all tokens contained in provided texts,
and updates the mapping of token to index, and index to token.
:param texts: list of texts
:type texts: list
"""
# create a storage for all tokens
vocab = set()
# add tokens from each text to vocabulary
for text in texts:
vocab.update(text.split(' '))
# sort the vocabulary in alphabetical order
vocab = sorted(vocab)
# add start, end and pad for sentence
vocab.append('<sos>')
vocab.append('<eos>')
vocab.append('<pad>')
# update the string to integer mapping, where integer is the index of the token
for i, s in enumerate(vocab):
self.stoi[s] = i
# reverse the previous vocabulary to create integer to string mapping
self.itos = {item[1]: item[0] for item in self.stoi.items()}
def text_to_sequence(self, text: str) -> list:
"""
This method converts the given text to a list of its individual tokens,
including start and end of string symbols.
:param text: input textual data
:type text: str
:return: list of tokens
:rtype: list
"""
# storage to append symbols to
sequence = []
# add the start of string symbol to storage
sequence.append(self.stoi['<sos>'])
# add each token in text to storage
for s in text.split(' '):
sequence.append(self.stoi[s])
# add the end of string symbol to storage
sequence.append(self.stoi['<eos>'])
return sequence
def texts_to_sequences(self, texts: list) -> list:
"""
This method converts each text in the provided list into sequences of characters.
Each sequence is appended to a list and the said list is returned.
:param texts: a list of input texts
:type texts: list
:return: a list of sequences
:rtype: list
"""
# storage to append sequences to
sequences = []
# for each text do
for text in texts:
# convert the text to a list of characters
sequence = self.text_to_sequence(text)
# append the lists of characters to an aggregated list storage
sequences.append(sequence)
return sequences
def sequence_to_text(self, sequence: list) -> str:
"""
This method converts the sequence of characters back into text.
:param sequence: list of characters
:type sequence: list
:return: text
:rtype: str
"""
# join the characters with no space in between
return ''.join(list(map(lambda i: self.itos[i], sequence)))
def sequences_to_texts(self, sequences: list) -> list:
"""
This method converts each provided sequence into text and returns all texts inside a list.
:param sequences: list of character sequences
:type sequences: list
:return: list of texts
:rtype: list
"""
# storage for texts
texts = []
# convert each sequence to text and append to storage
for sequence in sequences:
text = self.sequence_to_text(sequence)
texts.append(text)
return texts
def predict_caption(self, sequence: list) -> str:
"""
This method predicts the caption by adding each symbol in sequence to a resulting string.
This keeps happening up until the end of sentence or padding is met.
:param sequence: list of characters
:type sequence: list
:return: image caption
:rtype: string
"""
# storage for the final caption
caption = ''
# for each index in a sequence of symbols
for i in sequence:
# if symbol is the end of sentence or padding, break
if i == self.stoi['<eos>'] or i == self.stoi['<pad>']:
break
# otherwise, add the symbol to the final caption
caption += self.itos[i]
return caption
def predict_captions(self, sequences: list) -> list:
"""
This method predicts the captions for each sequence in a list of sequences.
:param sequences: list of sequences
:type sequences: list
:return: list of final image captions
:rtype: list
"""
# storage for captions
captions = []
# for each sequence, do
for sequence in sequences:
# predict the caption per sequence
caption = self.predict_caption(sequence)
# append to the storage of captions
captions.append(caption)
return captions
# split the InChI string with the backslash delimiter
train_df['InChI_chemical_formula'] = train_df['InChI'].apply(lambda x: x.split('/')[1])
```
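To see the Tokenizer in action on a toy example (a hypothetical snippet, independent of the full training data):
```
toy_tokenizer = Tokenizer()
toy_tokenizer.fit_on_texts(['C 13 H 20 O S', 'C 2 H 6 O'])
seq = toy_tokenizer.text_to_sequence('C 2 H 6 O')
print(seq)                                     # token indices, wrapped in <sos> ... <eos>
print(toy_tokenizer.predict_caption(seq[1:]))  # -> 'C2H6O' (stops at <eos>)
```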
### Pre-process
> This performs all preprocessing steps, mainly: (1) converting each InChI string into a space-separated list of tokens,
(2) fitting the tokenizer on these token lists and converting each text to a sequence of token indices, and (3) computing the length of each resulting sequence. The results are stored in `train_df`.
```
# split the InChI string into the chemical formula part and the other layers part
train_df['InChI_text'] = (
train_df['InChI_chemical_formula'].apply(split_inchi_formula)
+ ' '
+ train_df['InChI'].apply(lambda x: '/'.join(x.split('/')[2:])).apply(split_inchi_layers).values
+ ' '
+ train_df['InChI'].apply(lambda x: x[x.find('/h'):]).apply(split_inchi_layers).values
)
# adjust for cases where the InChI has no hydrogen ('/h') layer
for idx in range(len(train_df['InChI_text'])):
if '/h' not in train_df.loc[idx, 'InChI']:
train_df.loc[idx, 'InChI_text'] = (
split_inchi_formula(train_df.loc[idx, 'InChI_chemical_formula'])
+
' '
+
split_inchi_layers('/'.join(train_df.loc[idx, 'InChI'].split('/')[2:])
)
)
# save the train_df in a separate csv
train_df.to_csv('../../../data/train_df.csv')
# create a tokenizer class
tokenizer = Tokenizer()
# create a vocabulary of all InChI tokens
tokenizer.fit_on_texts(train_df['InChI_text'].values)
# save the tokenizer
torch.save(tokenizer, '../../../data/tokenizer.pth')
# store all sequence lengths
lengths = []
# creates a progress bar around the iterable
tk = tqdm(train_df['InChI_text'].values, total=len(train_df))
# for each text, i.e. InChI string, in the iterable, do
for text in tk:
# convert text to sequence of characters
seq = tokenizer.text_to_sequence(text)
# caption length excluding the <sos> and <eos> tokens; append to the aggregated storage
length = len(seq) - 2
lengths.append(length)
# write down the lengths in the dataframe
train_df['InChI_length'] = lengths
# save as a pickle file
train_df.to_pickle('../../../data/train.pkl')
print('Saved the train dataframe as a pickle file.')
```
```
import functools
import pathlib
import numpy as np
import matplotlib.pyplot as plt
import shapely.geometry
import skimage.draw
import tensorflow as tf
import pydicom
import pymedphys
import pymedphys._dicom.structure as dcm_struct
# Put all of the DICOM data here, file structure doesn't matter:
data_path_root = pathlib.Path.home().joinpath('.data/dicom-ct-and-structures')
dcm_paths = list(data_path_root.rglob('**/*.dcm'))
dcm_headers = []
for dcm_path in dcm_paths:
dcm_headers.append(pydicom.read_file(
dcm_path, force=True, specific_tags=['SOPInstanceUID', 'SOPClassUID']))
ct_image_paths = {
header.SOPInstanceUID: path
for header, path in zip(dcm_headers, dcm_paths)
if header.SOPClassUID.name == "CT Image Storage"
}
structure_set_paths = {
header.SOPInstanceUID: path
for header, path in zip(dcm_headers, dcm_paths)
if header.SOPClassUID.name == "RT Structure Set Storage"
}
# names = set()
# for uid, path in structure_set_paths.items():
# dcm = pydicom.read_file(
# path, force=True, specific_tags=['StructureSetROISequence'])
# for item in dcm.StructureSetROISequence:
# names.add(item.ROIName)
names_map = {
'BB': "bite_block",
'Bladder': "bladder",
"Bladder_obj": None,
"Bowel": 'bowel',
"Bowel_obj": None,
"Box Adapter": None,
"BoxAdaptor": None,
"Brain": "brain",
"Brainstem": "brainstem",
"brainstem": "brainstem",
"Bulla Lt": "bulla_left",
"L bulla": "bulla_left",
"Bulla Rt": "bulla_right",
"Bulla L": "bulla_left",
"Bulla Left": "bulla_left",
"Bulla R": "bulla_right",
"Bulla Right": "bulla_right",
"R bulla": "bulla_right",
"CTV": None,
"CTV Eval": None,
"CTV thyroids": None,
"CTVCT": None,
"CTVMRI": None,
"CTVSmall": None,
"CTVeval": None,
"CTVnew": None,
"Chiasm": "chiasm",
"Colon": "colon",
"colon": "colon",
"Colon_obj": None,
"Cord": "spinal_cord",
"SPINAL CORD": "spinal_cord",
"Spinal Cord": "spinal_cord",
"Cord PRV": None,
"Couch Edge": None,
"Couch Foam Half Couch": None,
"Couch Outer Half Couch": None,
"GTV": None,
"24.000Gy": None,
"15.000Gy_AH": None,
"15.000Gy_NC": None,
"15.000Gy_v": None,
"30.000Gy_AH": None,
"30.000Gy_NC": None,
"30.000Gy_v": None,
"95%_Large": None,
"95.00%_SMALL": None,
"BowelObj_Large": None,
"BowelObj_small": None,
"AdrenalGTV": None,
"Bone_or": None,
"BrainObj": None,
"CTV1": None,
"CTV_LN": None,
"CTV_obj": None,
"CTV_uncropped": None,
"CTVmargin": None,
"CTVmargin_eval": None,
"CTVobj": None,
"CTVobjnew": None,
"CTVoptimise": None,
"CTVoptimisenew": None,
"Cauda equina": "cauda_equina",
"GTV LN": None,
"GTV thyroids": None,
"GTV+SCAR": None,
"GTV-2": None,
"GTV/scar": None,
"GTVCT": None,
"GTVMRI": None,
"GTV_Combined": None,
"GTVcombined": None,
"GTVobj": None,
"GTVoptimise": None,
"Heart": "heart",
"Heart/GVs": None,
"INGUINALobj": None,
"Implant": None,
"Implant_Avoid": None,
"InguinalLn": None,
"Kidney Lt": "kidney_left",
"Lkidney": "kidney_left",
"Kidney Rt": "kidney_right",
"Rkidney": "kidney_right",
"LN": None,
"LN GTV": None,
"LN Mandibular": None,
"LN Retropharyngeal": None,
"LNCTV": None,
"LNeval": None,
"Lacrimal Lt": "lacrimal_left",
"Lacrimal Rt": "lacrimal_right",
"Larynx": "larynx",
"Larynx/trachea": None,
"Liver": "liver",
"Lung Lt": "lung_left",
"Lung Left": "lung_left",
"Lung Rt": "lung_right",
"Lung_Combined": None,
"Lung_L": "lung_left",
"Lung_R": "lung_right",
"Lung Right": "lung_right",
"Oesophagus": "oesophagus",
"Esophagus": 'oesophagus',
"esophagus": 'oesophagus',
"OD": "lens_right",
"OD Lens": "lens_right",
"Lens OD": "lens_right",
"ODlens": "lens_right",
"OS": "lens_left",
"OS lens": "lens_left",
"Lens OS": "lens_left",
"OSlens": "lens_left",
"OpPathPRV": None,
"L optic N": "optic_nerve_left",
"OpticNLeft": "optic_nerve_left",
"LopticN": "optic_nerve_left",
"OpticL": "optic_nerve_left",
"Loptic": "optic_nerve_left",
"OpticNRight": "optic_nerve_right",
"OpticR": "optic_nerve_right",
"R optic N": "optic_nerve_right",
"Roptic": "optic_nerve_right",
"RopticN": "optic_nerve_right",
"PTV": None,
"PTV LN eval": None,
"PTV Prostate": None,
"PTV bladder": None,
"PTV crop": None,
"PTV eval": None,
"PTV nodes": None,
"PTV thyroids": None,
"PTV thyroid eval": None,
"PTV uncropped": None,
"PTV+2cm": None,
"PTV+4cm": None,
"PTV_Combined": None,
"PTV_INGUINAL": None,
'Pituitary': 'pituitary',
"Prostate": 'prostate',
"prostate": 'prostate',
"Rectum": 'rectum',
"OpticPathway": 'optic_pathway',
"Small Bowel": "small_bowel",
"Spleen": "spleen",
"Stomach": "stomach",
"Thyroid": "thyroid",
"Tongue": "tongue",
"tongue": "tongue",
"Trachea": "trachea",
"trachea": "trachea",
"Urethra": "urethra",
"Vacbag": "vacuum_bag",
"vacbag": "vacuum_bag",
"patient": "patient",
"testicles": "testicles"
}
ignore_list = [
'CTV start',
'CTV_Combined',
'ColonObj_large',
'ColonObj_small',
'CordPRV',
'CORDprv',
'Couch Foam Full Couch',
'Couch Outer Full Couch',
'Couch Parts Full Couch',
'LnCTV',
'LnGTV',
'Mand Ln',
'OpPathway',
'PTV Ln',
'PTV Ln PreSc',
'PTV total combined',
'PTVCombined_Large',
'PTVCombined_Small',
'PTVLarge',
'PTVSmall',
'PTV_Eval',
'PTV_LN',
'PTV_LN_15/2',
'PTV_LNeval',
'PTV_eval',
'PTV_eval_small',
'PTV_obj',
'PTVcombined',
'PTVcombined_15/2',
'PTVcombined_Eval',
'PTVeval',
'PTVeval_combined',
'PTVnew',
'PTVobj',
'PTVobjnew',
'PTVoptimise',
'PTVoptimisenew',
'PTVpituitary',
'PTVprimary',
'PTVsmooth',
'Patient small',
'Patient-bolus',
'R prescap Ln',
'Rectal_Syringe',
'RectumObj_large',
'RectumObj_small',
'Rectum_obj',
'RetroLn',
'CTVcombined',
'CTVlns',
'CTVprimary',
'CombinedLung',
'Cord-PTV',
'External',
'GTVscar',
'HeartCTV',
'HeartGTV',
'HeartPTV',
'Inguinal LnCTV',
'Inguinal+2cm',
'InguinalPTV_eval',
'Kidneys (Combined)',
'LN CTV',
'LN_PTV_eval',
'LN_ring',
'Lung',
'Lung total',
'LungGTV1',
'LungGTV2',
'LungGTV3',
'LungGTV4',
'LungGTV5',
'LungGTVMIP',
'LungPTV',
'MandiblePTV_eval',
'Mandible_ring',
'Nasal PTVeval',
'Nasal_ring',
'ODnew',
'OR_ Bone',
'OR_Metal',
'OR_Tissue',
'PTV Eval',
'PTVLn',
'PTV_Combined_eval',
'PTV_Distal',
'PTV_Distal_Crop',
'PTV_LN_Inguinal',
'PTV_LN_Popliteal',
'PTV_LN_Smooth',
'PTV_Sup',
'PTVdistal_eval',
'PTVsubSIB2mm',
'Popliteal LnCTV',
'Popliteal+2cm',
'SC_Olap',
'SC_Olap2',
'SC_Olapnew',
'SC_Olapnew2',
'SIB',
'Scar',
'Scar marker',
'Skin Spare',
'Skin Sparing',
'Skin spare',
'Small Bowel Replan',
'Small Bowel replan',
'SmallPTV',
'Small_PTV_Combined',
'Structure1',
'Structure2',
'Structure3',
'Structure4',
'Structure5',
'Syringe fill',
'TEST',
'Tissue_or',
'Tracheaoesophagus',
'Urethra/vulva',
'Urinary System',
'bolus_5mm',
'bowel_obj',
'brain-PTV',
'brain-ptv',
'combined PTV',
'ctv cropped',
'lungs',
'p',
'patient & bolus',
'patient no bolus',
'patient&Bolus',
'patient&bolus',
'patient-bolus',
'patient_1',
'patientbol',
'skin spare',
'urethra_PRV',
'whole lung'
]
for key in ignore_list:
names_map[key] = None
# mapped_names = set(names_map.keys())
# print(mapped_names.difference(names))
# names.difference(mapped_names)
set([item for key, item in names_map.items()]).difference({None})
# structure_uid = list(structure_set_paths.items())[0][0]
structure_uid = '1.2.840.10008.5.1.4.1.1.481.3.1574822743'
structure_set_path = structure_set_paths[structure_uid]
structure_set_path
structure_set = pydicom.read_file(
structure_set_path,
force=True,
specific_tags=['ROIContourSequence', 'StructureSetROISequence'])
number_to_name_map = {
roi_sequence_item.ROINumber: names_map[roi_sequence_item.ROIName]
for roi_sequence_item in structure_set.StructureSetROISequence
if names_map[roi_sequence_item.ROIName] is not None
}
number_to_name_map
contours_by_ct_uid = {}
for roi_contour_sequence_item in structure_set.ROIContourSequence:
try:
structure_name = number_to_name_map[roi_contour_sequence_item.ReferencedROINumber]
except KeyError:
continue
for contour_sequence_item in roi_contour_sequence_item.ContourSequence:
ct_uid = contour_sequence_item.ContourImageSequence[0].ReferencedSOPInstanceUID
try:
_ = contours_by_ct_uid[ct_uid]
except KeyError:
contours_by_ct_uid[ct_uid] = dict()
try:
contours_by_ct_uid[ct_uid][structure_name].append(contour_sequence_item.ContourData)
except KeyError:
contours_by_ct_uid[ct_uid][structure_name] = [contour_sequence_item.ContourData]
# ct_uid = list(contours_by_ct_uid.keys())[50]
ct_uid = '1.2.840.113704.1.111.2804.1556591059.12956'
ct_path = ct_image_paths[ct_uid]
dcm_ct = pydicom.read_file(ct_path, force=True)
dcm_ct.file_meta.TransferSyntaxUID = pydicom.uid.ImplicitVRLittleEndian
def get_image_transformation_parameters(dcm_ct):
# From Matthew Cooper's work in ../old/data_generator.py
position = dcm_ct.ImagePositionPatient
spacing = [x for x in dcm_ct.PixelSpacing] + [dcm_ct.SliceThickness]
orientation = dcm_ct.ImageOrientationPatient
dx, dy, *_ = spacing
Cx, Cy, *_ = position
Ox, Oy = orientation[0], orientation[4]
return dx, dy, Cx, Cy, Ox, Oy
contours_by_ct_uid[ct_uid].keys()
organ = 'urethra'
original_contours = contours_by_ct_uid[ct_uid][organ]
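# The two functions below build an anti-aliased (fractional) organ mask from the contour data:
# each contour polygon is rasterised on a grid `expansion` times finer than the CT grid, the
# per-contour masks are OR-ed together, and the result is block-averaged back down so boundary
# pixels take intermediate values (finally rescaled from [0, 1] to [-1, 1]).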
def reduce_expanded_mask(expanded_mask, img_size, expansion):
return np.mean(np.mean(
tf.reshape(expanded_mask, (img_size, expansion, img_size, expansion)),
axis=1), axis=2)
def calculate_aliased_mask(contours, dcm_ct, expansion=5):
dx, dy, Cx, Cy, Ox, Oy = get_image_transformation_parameters(dcm_ct)
ct_size = np.shape(dcm_ct.pixel_array)
x_grid = np.arange(Cx, Cx + ct_size[0]*dx*Ox, dx*Ox)
y_grid = np.arange(Cy, Cy + ct_size[1]*dy*Oy, dy*Oy)
new_ct_size = np.array(ct_size) * expansion
expanded_mask = np.zeros(new_ct_size)
for xyz in contours:
x = np.array(xyz[0::3])
y = np.array(xyz[1::3])
z = xyz[2::3]
assert len(set(z)) == 1
r = (((y - Cy) / dy * Oy)) * expansion + (expansion - 1) * 0.5
c = (((x - Cx) / dx * Ox)) * expansion + (expansion - 1) * 0.5
expanded_mask = np.logical_or(expanded_mask, skimage.draw.polygon2mask(new_ct_size, np.array(list(zip(r, c)))))
mask = reduce_expanded_mask(expanded_mask, ct_size[0], expansion)
mask = 2 * mask - 1
return x_grid, y_grid, mask
def get_contours_from_mask(x_grid, y_grid, mask):
cs = plt.contour(x_grid, y_grid, mask, [0]);
contours = [
path.vertices for path in cs.collections[0].get_paths()
]
plt.close()
return contours
x_grid, y_grid, mask_with_aliasing = calculate_aliased_mask(original_contours, dcm_ct)
_, _, mask_without_aliasing = calculate_aliased_mask(original_contours, dcm_ct, expansion=1)
contours_with_aliasing = get_contours_from_mask(x_grid, y_grid, mask_with_aliasing)
contours_without_aliasing = get_contours_from_mask(x_grid, y_grid, mask_without_aliasing)
plt.figure(figsize=(10,10))
for xyz in original_contours:
x = np.array(xyz[0::3])
y = np.array(xyz[1::3])
plt.plot(x, y)
plt.axis('equal')
plt.figure(figsize=(10,10))
for contour in contours_with_aliasing:
plt.plot(contour[:,0], contour[:,1])
plt.plot(contour[:,0], contour[:,1])
plt.axis('equal')
plt.figure(figsize=(10,10))
for contour in contours_with_aliasing:
plt.plot(contour[:,0], contour[:,1])
plt.plot(contour[:,0], contour[:,1])
for xyz in original_contours:
x = np.array(xyz[0::3])
y = np.array(xyz[1::3])
plt.plot(x, y)
plt.axis('equal')
plt.figure(figsize=(10,10))
for contour in contours_without_aliasing:
plt.plot(contour[:,0], contour[:,1])
plt.plot(contour[:,0], contour[:,1])
plt.axis('equal')
plt.figure(figsize=(10,10))
for contour in contours_without_aliasing:
plt.plot(contour[:,0], contour[:,1])
plt.plot(contour[:,0], contour[:,1])
for xyz in original_contours:
x = np.array(xyz[0::3])
y = np.array(xyz[1::3])
plt.plot(x, y)
plt.axis('equal')
```
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.datasets import mnist
(x_train, y_train), _ = mnist.load_data()
x_train = x_train / 255.0
x_train = np.expand_dims(x_train, axis=3)
print(x_train.shape)
print(y_train.shape)
num_classes = 10
plt.imshow(np.squeeze(x_train[10]))
plt.show()
print(y_train[10])
'''def generator(z, y, reuse=False, verbose=True):
with tf.variable_scope("generator", reuse=reuse):
# Concatenate noise and conditional one-hot variable
inputs = tf.concat([z, y], 1)
# FC layer
fc1 = tf.layers.dense(inputs=inputs, units= 7 * 7 * 128, activation=tf.nn.leaky_relu)
reshaped = tf.reshape(fc1, shape=[-1, 7, 7, 128])
upconv1 = tf.layers.conv2d_transpose(inputs=reshaped,
filters=32,
kernel_size=[5,5],
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
strides=[2,2],
activation=tf.nn.leaky_relu,
padding='same',
name='upscore1')
upconv2 = tf.layers.conv2d_transpose(inputs=upconv1,
filters=1,
kernel_size=[3,3],
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
strides=[2,2],
activation=None,
padding='same',
name='upscore2')
prob = tf.nn.sigmoid(upconv2)
if verbose:
print("\nGenerator:")
print(inputs)
print(fc1)
print(reshaped)
print(upconv1)
print(upconv2)
return prob
def discriminator(x, y, reuse=False, verbose=True):
with tf.variable_scope("discriminator", reuse=reuse):
conv1 = tf.layers.conv2d(inputs=x,
filters=64,
kernel_size=[5,5],
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
strides=[1,1],
activation=tf.nn.leaky_relu,
padding='same',
name='Conv1')
pool1 = tf.layers.max_pooling2d(inputs=conv1,
pool_size=[2,2],
strides=[2,2],
padding='same',
name='Pool1')
conv2 = tf.layers.conv2d(inputs=pool1,
filters=32,
kernel_size=[3,3],
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1),
strides=[2,2],
activation=tf.nn.leaky_relu,
padding='same',
name='Conv2')
pool2 = tf.layers.max_pooling2d(inputs=conv2,
pool_size=[2,2],
strides=[2,2],
padding='same',
name='Pool2')
flattened = tf.layers.flatten(pool2)
concatened = tf.concat([flattened, y], 1)
fc1 = tf.layers.dense(inputs=concatened, units=256, activation=tf.nn.leaky_relu)
fc2 = tf.layers.dense(inputs=fc1, units=1, activation=None)
prob = tf.nn.sigmoid(fc2)
if verbose:
print("\nDiscriminator:")
print(conv1)
print(pool1)
print(conv2)
print(pool2)
print(flattened)
print(concatened)
print(fc1)
print(fc2)
return prob, fc2'''
def sample_Z(batch_size, img_size):
# Sample noise for generator
return np.random.uniform(-1., 1., size=[batch_size, img_size])
def one_hot(batch_size, num_classes, labels):
assert(batch_size == len(labels))
y_one_hot = np.zeros(shape=[batch_size, num_classes])
y_one_hot[np.arange(batch_size), labels] = 1
return y_one_hot
def generator(z, y, reuse=False, verbose=True):
with tf.variable_scope("generator", reuse=reuse):
inputs = tf.concat([z, y], 1)
fc1 = tf.layers.dense(inputs=inputs, units=256, activation=tf.nn.leaky_relu)
fc2 = tf.layers.dense(inputs=fc1, units=784, activation=None)
logits = tf.nn.sigmoid(fc2)
if verbose:
print("\nGenerator:")
print(inputs)
print(fc1)
print(fc2)
return logits
def discriminator(x, y, reuse=False, verbose=True):
with tf.variable_scope("discriminator", reuse=reuse):
inputs = tf.concat([x, y], 1)
fc1 = tf.layers.dense(inputs=inputs, units=256, activation=tf.nn.leaky_relu)
fc2 = tf.layers.dense(inputs=fc1, units=1, activation=None)
prob = tf.nn.sigmoid(fc2)
if verbose:
print("\nDiscriminator:")
print(inputs)
print(fc1)
print(fc2)
return prob, fc2
tf.reset_default_graph()
# Discriminator input
#X = tf.placeholder(tf.float32, shape=[None, x_train.shape[1], x_train.shape[2], 1], name='X')
X = tf.placeholder(tf.float32, shape=[None, x_train.shape[1] * x_train.shape[2] * 1], name='X')
# Generator noise input
Z_dim = 100
Z = tf.placeholder(tf.float32, shape=[None, Z_dim], name='Z')
# Generator conditional
Y = tf.placeholder(tf.float32, shape=[None, num_classes], name='Y')
# Print shapes
print("Inputs:")
print("Discriminator input: " + str(X))
print("Conditional variable: " + str(Y))
print("Generator input noise: " + str(Z))
# Networks
gen_sample = generator(Z, Y)
D_real, D_logit_real = discriminator(X, Y)
D_fake, D_logit_fake = discriminator(gen_sample, Y, reuse=True, verbose=False)
```
## Theoretical remark
### Binary cross entropy loss
\begin{equation*}
L(\theta) = - \frac{1}{n} \sum_{i=1}^n \left[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right]
\end{equation*}
- Discriminator final probability is 1 => REAL IMAGE
- Discriminator final probability is 0 => FAKE IMAGE
Log values:
- Log(1) => Loss would be 0
- Log(0+) => Loss would tend to - ∞
### Generator:
Maximize D(G(z))
### Discriminator:
Maximize D(x) AND minimize D(G(z))
```
# Losses have minus sign because I have to maximize them
D_loss = - tf.reduce_mean( tf.log(D_real) + tf.log(1. - D_fake) )
G_loss = - tf.reduce_mean( tf.log(D_fake) )
# Optimizers
D_var = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='discriminator')
D_optimizer = tf.train.AdamOptimizer(learning_rate=0.0005).minimize(D_loss, var_list=D_var)
G_var = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='generator')
G_optimizer = tf.train.AdamOptimizer(learning_rate=0.0005).minimize(G_loss, var_list=G_var)
```
## Training of the generator and discriminator network
```
batch_size = 64
with tf.Session() as sess:
# Run the initializer
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
# Epochs
for n in range(30):
print("Epoch " + str(n+1))
for i in range(len(x_train) // batch_size):
X_tmp = np.reshape(x_train[i*batch_size:(i+1)*batch_size], (batch_size, -1))
#X_tmp = x_train[i*batch_size:(i+1)*batch_size]
sampled_noise = sample_Z(batch_size, Z_dim)
one_hot_sampled = one_hot(batch_size, num_classes, y_train[i*batch_size:(i+1)*batch_size])
_, D_loss_val = sess.run([D_optimizer, D_loss], feed_dict={X: X_tmp,
Y: one_hot_sampled,
Z: sampled_noise})
_, G_loss_val = sess.run([G_optimizer, G_loss], feed_dict={Y: one_hot_sampled,
Z: sampled_noise})
if i % 300 == 0:
print(str(D_loss_val) + " " + str(G_loss_val))
save_path = saver.save(sess, "./checkpoints/model.ckpt")
```
## Generate images with constraint (Y)
```
generate_number = 9
with tf.Session() as sess:
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
saver.restore(sess, "./checkpoints/model.ckpt")
sampled_noise = sample_Z(1, Z_dim)
one_hot_sampled = one_hot(1, num_classes, [generate_number])
generated = sess.run(gen_sample, feed_dict={Y: one_hot_sampled,
Z: sampled_noise})
img_generated = np.reshape(generated, (28, 28))
plt.imshow(img_generated)
plt.show()
fig, axes = plt.subplots(4, 4)
fig.subplots_adjust(hspace=0.1)
with tf.Session() as sess:
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
saver.restore(sess, "./checkpoints/model.ckpt")
for i, ax in enumerate(axes.flat):
generate_number = int(i / 4)
sampled_noise = sample_Z(1, Z_dim)
one_hot_sampled = one_hot(1, num_classes, [generate_number])
generated = sess.run(gen_sample, feed_dict={Y: one_hot_sampled,
Z: sampled_noise})
img_generated = np.reshape(generated, (28, 28))
ax.imshow(img_generated)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
```
## Accessing High Resolution Electricity Access (HREA) data with the Planetary Computer STAC API
The HREA project aims to provide open access to new indicators of electricity access and reliability across the world. Leveraging VIIRS satellite imagery with computational methods, these high-resolution data provide new tools to track progress towards reliable and sustainable energy access across the world.
This notebook provides an example of accessing HREA data using the Planetary Computer STAC API.
### Environment setup
This notebook works with or without an API key, but you will be given more permissive access to the data with an API key. The Planetary Computer Hub is pre-configured to use your API key.
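If you are running this outside the Hub and want to use your own key, one way to supply it (a minimal sketch using the `planetary_computer` settings helper; the key string is a placeholder, not a real credential) is:
```
import planetary_computer as pc

# Optional: supply your own API key for more permissive data access.
pc.settings.set_subscription_key("<your-api-key>")
```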
```
import matplotlib.colors as colors
import matplotlib.pyplot as plt
import planetary_computer as pc
import rasterio
import rioxarray
from pystac_client import Client
from rasterio.plot import show
```
### Selecting a region and querying the API
The HREA dataset covers all of Africa as well as Ecuador. Let's pick an area of interest that covers Djibouti and query the Planetary Computer API for data coverage for the year 2019.
```
area_of_interest = {
"type": "Polygon",
"coordinates": [
[
[41.693115234375, 10.865675826639414],
[43.275146484375, 10.865675826639414],
[43.275146484375, 12.554563528593656],
[41.693115234375, 12.554563528593656],
[41.693115234375, 10.865675826639414],
]
],
}
catalog = Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")
search = catalog.search(
collections=["hrea"], intersects=area_of_interest, datetime="2019-12-31"
)
# Check how many items were returned, there could be more pages of results as well
items = [pc.sign(item) for item in search.get_items()]
print(f"Returned {len(items)} Items")
```
We found 3 items for our search. We'll grab just the one for Djibouti and see what data assets are available on it.
```
(item,) = [x for x in items if "Djibouti" in x.id]
data_assets = [
f"{key}: {asset.title}"
for key, asset in item.assets.items()
if "data" in asset.roles
]
print(*data_assets, sep="\n")
```
### Plotting the data
Let's pick the variable `light-composite`, and read in the entire GeoTIFF to plot.
```
light_comp_asset = item.assets["light-composite"]
data_array = rioxarray.open_rasterio(light_comp_asset.href)
fig, ax = plt.subplots(1, 1, figsize=(14, 7), dpi=100)
show(
data_array,
ax=ax,
norm=colors.PowerNorm(1, vmin=0.01, vmax=1.4),
cmap="magma",
title="Djibouti (2019)",
)
plt.axis("off")
plt.show()
```
### Read a window
Cloud Optimized GeoTIFFs (COGs) allow us to efficiently download and read sections of a file, rather than the entire file, when only part of the region is required. The COGs are stored on disk with an internal set of windows. You can read sections of any shape and size, but reading them at the file-defined window size is most efficient. Let's read the same asset, but this time only request the second window.
```
# Reading only the second window of the file, as an example
i_window = 2
with rasterio.open(light_comp_asset.href) as src:
windows = list(src.block_windows())
print("Available windows:", *windows, sep="\n")
_, window = windows[i_window]
section = data_array.rio.isel_window(window)
fig, xsection = plt.subplots(1, 1, figsize=(14, 7))
show(
section,
ax=xsection,
norm=colors.PowerNorm(1, vmin=0.01, vmax=1.4),
cmap="magma",
title="Reading a single window",
)
plt.axis("off")
plt.show()
```
### Zoom in on a region within the retrieved window
Let's plot the region around the city of Dikhil, situated within that second data window, using this bounding box (in x/y coordinates, i.e. longitude / latitude):
```
(42.345868941491204, 11.079694223371735, 42.40420227530527, 11.138027557181712)
```
```
fig, xsection = plt.subplots(1, 1, figsize=(14, 7))
show(
section.sel(
x=slice(42.345868941491204, 42.40420227530527),
y=slice(11.138027557181712, 11.079694223371735),
),
ax=xsection,
norm=colors.PowerNorm(1, vmin=0.01, vmax=1.4),
cmap="magma",
title="Dikhil (2019)",
)
plt.axis("off")
plt.show()
```
### Plot change over time
The HREA dataset goes back several years. Let's search again for the same area, but this time over a longer temporal span.
```
search = catalog.search(
collections=["hrea"], intersects=area_of_interest, datetime="2012-12-31/2019-12-31"
)
items = [
pc.sign(item).to_dict() for item in search.get_items() if "Djibouti" in item.id
]
print(f"Returned {len(items)} Items:")
```
We got 8 items this time, each corresponding to a single year. To plot the change of light intensity over time, we'll open the same asset on each of these year-items and read in the window with Dikhil. Since we're using multiple items, we'll use `stackstac` to stack them together into a single DataArray for us.
```
import stackstac
bounds_latlon = (
42.345868941491204,
11.079694223371735,
42.40420227530527,
11.138027557181712,
)
dikhil = (
stackstac.stack(items, assets=["light-composite"], bounds_latlon=bounds_latlon)
.squeeze()
.compute()
.quantile(0.9, dim=["y", "x"])
)
fig, ax = plt.subplots(figsize=(12, 6))
dikhil.plot(ax=ax)
ax.set(title="Dikhil composite light output", ylabel="Annual light output, normalized");
```
|
github_jupyter
|
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "0"
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
import pandas as pd
from tqdm.auto import tqdm
import torch
from torch import nn
import gin
import pickle
import io
from sparse_causal_model_learner_rl.trainable.gumbel_switch import WithInputSwitch, sample_from_logits_simple
gin.enter_interactive_mode()
from sparse_causal_model_learner_rl.loss.losses import fit_loss
from sparse_causal_model_learner_rl.metrics.context_rewrite import context_rewriter
from sparse_causal_model_learner_rl.visual.learner_visual import graph_for_matrices
ckpt = '/home/sergei/ray_results/5x5_1f1c1k_obs_rec_nonlin_gnn_gumbel_siamese_l2_kc_dec_stop_after_completes/main_fcn_d66d7_00000_0_2021-02-01_22-09-57/checkpoint_0/checkpoint'
class LinearModel(nn.Module):
def __init__(self, input_shape):
super(LinearModel, self).__init__()
self.layer = nn.Linear(in_features=10, out_features=1, bias=True)
def forward(self, x):
return self.layer(x)
import ray
ray.init(address='10.90.38.7:6379', ignore_reinit_error=True)
# https://github.com/pytorch/pytorch/issues/16797
class CPU_Unpickler(pickle.Unpickler):
def find_class(self, module, name):
if module == 'torch.storage' and name == '_load_from_bytes':
return lambda b: torch.load(io.BytesIO(b), map_location='cpu')
else: return super().find_class(module, name)
with open(ckpt, 'rb') as f:
learner = pickle.load(f)  # alternatively: CPU_Unpickler(f).load() to force tensors onto the CPU
learner.collect_steps()
ctx = learner._context
ox = ctx['obs_x']
oy = ctx['obs_y']
ax = ctx['action_x']
obs = ctx['obs']
obs_ns = None
obs_s = None
def siamese_feature_discriminator_l2(obs, decoder, margin=1.0, **kwargs):
def loss(y_true, y_pred):
"""L2 norm for the distance, no flat."""
delta = y_true - y_pred
delta = delta.pow(2)
delta = delta.flatten(start_dim=1)
delta = delta.sum(1)
return delta
# original inputs order
batch_dim = obs.shape[0]
# random permutation for incorrect inputs
idxes = torch.randperm(batch_dim).to(obs.device)
obs_shuffled = obs[idxes]
idxes_orig = torch.arange(start=0, end=batch_dim).to(obs.device)
target_incorrect = (idxes == idxes_orig).to(obs.device)
delta_obs_obs_shuffled = (obs - obs_shuffled).pow(2).flatten(start_dim=1).max(1).values
# distance_shuffle = loss(obs, obs_shuffled)
distance_f = loss(decoder(obs), decoder(obs_shuffled))
global obs_ns, obs_s
obs_ns = obs
obs_s = obs_shuffled
# print(torch.nn.ReLU()(margin - distance_f), torch.where)
return {'loss': torch.where(~target_incorrect, torch.nn.ReLU()(margin - distance_f), distance_f).mean(),
'metrics': {'distance_plus': distance_f[~target_incorrect].mean().item(),
'distance_minus': distance_f[target_incorrect].mean().item(),
'delta_obs_obs_shuffled': delta_obs_obs_shuffled.detach().cpu().numpy(),
'same_input_frac': (1.*target_incorrect).mean().item()}
}
siamese_feature_discriminator_l2(**ctx)
#plt.hist(np.log(siamese_feature_discriminator_l2(**ctx)['metrics']['delta_obs_obs_shuffled']))
delta = obs_s - obs_ns
from collections import Counter
#learner.collect_steps()
learner.env.engine
plt.hist(delta.abs().flatten(start_dim=1).max(1).values.cpu().numpy())
opt = torch.optim.Adam(params=learner.decoder.parameters(), lr=1e-3)
from causal_util.collect_data import EnvDataCollector
learner.env = learner.create_env()
learner.collector = EnvDataCollector(learner.env)
learner.env.engine
losses = []
dplus = []
for _ in tqdm(range(1000)):
learner.collect_steps()
ctx = learner._context
for _ in range(5):
opt.zero_grad()
l = siamese_feature_discriminator_l2(**ctx, margin=1.0)
loss = l['loss']
loss.backward()
losses.append(loss.item())
dplus.append(l['metrics']['distance_plus'])
opt.step()
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plt.plot(losses)
plt.yscale('log')
plt.subplot(1, 2, 2)
plt.yscale('log')
plt.plot(dplus)
```
|
github_jupyter
|
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn as sk
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
# Dataset taken from https://www.kaggle.com/aungpyaeap/fish-market
# Measurements of several popular commercial fish species
# length 1 = Body height
# length 2 = Total Length
# length 3 = Diagonal Length
fish_data = pd.read_csv("datasets/Fish.csv", delimiter=',')
print(fish_data)
# Select two variables
x_label = 'Length1'
y_label = 'Weight'
data = fish_data[[x_label, y_label]]
print(data)
# Define the size of the validation and test sets
val_test_size = round(0.2*len(data))
print(val_test_size)
# Generate a unique seed
my_code = "Грушин"
seed_limit = 2 ** 32
my_seed = int.from_bytes(my_code.encode(), "little") % seed_limit
# Create the training, validation and test sets
random_state = my_seed
train_val, test = train_test_split(data, test_size=val_test_size, random_state=random_state)
train, val = train_test_split(train_val, test_size=val_test_size, random_state=random_state)
print(len(train), len(val), len(test))
# Convert the data to the format expected by the sklearn library
train_x = np.array(train[x_label]).reshape(-1,1)
train_y = np.array(train[y_label]).reshape(-1,1)
val_x = np.array(val[x_label]).reshape(-1,1)
val_y = np.array(val[y_label]).reshape(-1,1)
test_x = np.array(test[x_label]).reshape(-1,1)
test_y = np.array(test[y_label]).reshape(-1,1)
# Draw a scatter plot
plt.plot(train_x, train_y, 'o')
plt.show()
# Create a linear regression model and fit it on the training set.
model1 = linear_model.LinearRegression()
model1.fit(train_x, train_y)
# Training result: the values of a and b in y = ax + b
print(model1.coef_, model1.intercept_)
a = model1.coef_[0]
b = model1.intercept_
print(a, b)
# Add the fitted line to the plot
x = np.linspace(min(train_x), max(train_x), 100)
y = a * x + b
plt.plot(train_x, train_y, 'o')
plt.plot(x, y)
plt.show()
# Evaluate the result on the validation set
val_predicted = model1.predict(val_x)
mse1 = mean_squared_error(val_y, val_predicted)
print(mse1)
# The result is not very convenient to interpret, so let's first normalize the values
scaler_x = MinMaxScaler()
scaler_x.fit(train_x)
scaled_train_x = scaler_x.transform(train_x)
scaler_y = MinMaxScaler()
scaler_y.fit(train_y)
scaled_train_y = scaler_y.transform(train_y)
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.show()
# Build the model and show the results for the normalized data
model2 = linear_model.LinearRegression()
model2.fit(scaled_train_x, scaled_train_y)
a = model2.coef_[0]
b = model2.intercept_
x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()
# Evaluate the result on the validation set
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)
val_predicted = model2.predict(scaled_val_x)
mse2 = mean_squared_error(scaled_val_y, val_predicted)
print(mse2)
# Build a linear regression model with L1 regularization and show the results for the normalized data.
model3 = linear_model.Lasso(alpha=0.01)
model3.fit(scaled_train_x, scaled_train_y)
a = model3.coef_[0]
b = model3.intercept_
x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()
# Evaluate the result on the validation set
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)
val_predicted = model3.predict(scaled_val_x)
mse3 = mean_squared_error(scaled_val_y, val_predicted)
print(mse3)
# You can experiment with the alpha parameter to reduce the error
# Build a linear regression model with L2 regularization and show the results for the normalized data
model4 = linear_model.Ridge(alpha=0.01)
model4.fit(scaled_train_x, scaled_train_y)
a = model4.coef_[0]
b = model4.intercept_
x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()
# Evaluate the result on the validation set
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)
val_predicted = model4.predict(scaled_val_x)
mse4 = mean_squared_error(scaled_val_y, val_predicted)
print(mse4)
# You can experiment with the alpha parameter to reduce the error
# Build a linear regression model with ElasticNet regularization and show the results for the normalized data
model5 = linear_model.ElasticNet(alpha=0.01, l1_ratio = 0.01)
model5.fit(scaled_train_x, scaled_train_y)
a = model5.coef_[0]
b = model5.intercept_
x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()
# Evaluate the result on the validation set
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)
val_predicted = model5.predict(scaled_val_x)
mse5 = mean_squared_error(scaled_val_y, val_predicted)
print(mse5)
# You can experiment with the alpha and l1_ratio parameters to reduce the error
# Print the errors of the models on the normalized data
print(mse2, mse3, mse4, mse5)
# The minimum is achieved by the second model, so compute the final error on the test set
scaled_test_x = scaler_x.transform(test_x)
scaled_test_y = scaler_y.transform(test_y)
test_predicted = model2.predict(scaled_test_x)
mse_test = mean_squared_error(scaled_test_y, test_predicted)
print(mse_test)
# Repeat the data selection, normalization, and analysis of the 4 models
# (plain linear regression, L1 regularization, L2 regularization, ElasticNet regularization)
# for x = Length2 and y = Width.
x_label = 'Length2'
y_label = 'Weight'
data = fish_data[[x_label, y_label]]
print(data)
val_test_size = round(0.2*len(data))
print(val_test_size)
random_state = my_seed
train_val, test = train_test_split(data, test_size=val_test_size, random_state=random_state)
train, val = train_test_split(train_val, test_size=val_test_size, random_state=random_state)
print(len(train), len(val), len(test))
train_x = np.array(train[x_label]).reshape(-1,1)
train_y = np.array(train[y_label]).reshape(-1,1)
val_x = np.array(val[x_label]).reshape(-1,1)
val_y = np.array(val[y_label]).reshape(-1,1)
test_x = np.array(test[x_label]).reshape(-1,1)
test_y = np.array(test[y_label]).reshape(-1,1)
plt.plot(train_x, train_y, 'o')
plt.show()
model1 = linear_model.LinearRegression().fit(train_x, train_y)
print(model1.coef_, model1.intercept_)
a = model1.coef_[0]
b = model1.intercept_
print(a, b)
x = np.linspace(min(train_x), max(train_x), 100)
y = a * x + b
plt.plot(train_x, train_y, 'o')
plt.plot(x, y)
plt.show()
val_predicted = model1.predict(val_x)
mse1 = mean_squared_error(val_y, val_predicted)
print(mse1)
scaler_x = MinMaxScaler().fit(train_x)
scaled_train_x = scaler_x.transform(train_x)
scaler_y = MinMaxScaler().fit(train_y)
scaled_train_y = scaler_y.transform(train_y)
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.show()
model2 = linear_model.LinearRegression().fit(scaled_train_x, scaled_train_y)
a = model2.coef_[0]
b = model2.intercept_
x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)
val_predicted = model2.predict(scaled_val_x)
mse2 = mean_squared_error(scaled_val_y, val_predicted)
print(mse2)
model3 = linear_model.Lasso(alpha=0.01).fit(scaled_train_x, scaled_train_y)
a = model3.coef_[0]
b = model3.intercept_
x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)
val_predicted = model3.predict(scaled_val_x)
mse3 = mean_squared_error(scaled_val_y, val_predicted)
print(mse3)
model4 = linear_model.Ridge(alpha=0.01).fit(scaled_train_x, scaled_train_y)
a = model4.coef_[0]
b = model4.intercept_
x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)
val_predicted = model4.predict(scaled_val_x)
mse4 = mean_squared_error(scaled_val_y, val_predicted)
print(mse4)
model5 = linear_model.ElasticNet(alpha=0.01, l1_ratio = 0.01)
model5.fit(scaled_train_x, scaled_train_y)
a = model5.coef_[0]
b = model5.intercept_
x = np.linspace(min(scaled_train_x), max(scaled_train_x), 100)
y = a * x + b
plt.plot(scaled_train_x, scaled_train_y, 'o')
plt.plot(x, y)
plt.show()
scaled_val_x = scaler_x.transform(val_x)
scaled_val_y = scaler_y.transform(val_y)
val_predicted = model5.predict(scaled_val_x)
mse5 = mean_squared_error(scaled_val_y, val_predicted)
print(mse5)
print(mse2, mse3, mse4, mse5)
scaled_test_x = scaler_x.transform(test_x)
scaled_test_y = scaler_y.transform(test_y)
test_predicted = model2.predict(scaled_test_x)
mse_test = mean_squared_error(scaled_test_y, test_predicted)
print(mse_test)
```
|
github_jupyter
|
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=50, centers=2, cluster_std=0.5, random_state=4)
y = 2 * y - 1
plt.scatter(X[y == -1, 0], X[y == -1, 1], marker='o', label="-1 class")
plt.scatter(X[y == +1, 0], X[y == +1, 1], marker='x', label="+1 class")
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.title("train data")
plt.show()
from sklearn.svm import SVC
model = SVC(kernel='linear', C=1e10).fit(X, y)
model.n_support_
model.support_
model.support_vectors_
y[model.support_]
xmin = X[:, 0].min()
xmax = X[:, 0].max()
ymin = X[:, 1].min()
ymax = X[:, 1].max()
xx = np.linspace(xmin, xmax, 10)
yy = np.linspace(ymin, ymax, 10)
X1, X2 = np.meshgrid(xx, yy)
Z = np.empty(X1.shape)
for (i, j), val in np.ndenumerate(X1):
x1 = val
x2 = X2[i, j]
p = model.decision_function([[x1, x2]])
Z[i, j] = p[0]
levels = [-1, 0, 1]
linestyles = ['dashed', 'solid', 'dashed']
plt.scatter(X[y == -1, 0], X[y == -1, 1], marker='o', label="-1 class")
plt.scatter(X[y == +1, 0], X[y == +1, 1], marker='x', label="+1 class")
plt.contour(X1, X2, Z, levels, colors='k', linestyles=linestyles)
plt.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=300, alpha=0.3)
x_new = [10, 2]
plt.scatter(x_new[0], x_new[1], marker='^', s=100)
plt.text(x_new[0] + 0.03, x_new[1] + 0.08, "test data")
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.title("SVM")
plt.show()
x_new = [10, 2]
model.decision_function([x_new])
model.coef_.dot(x_new) + model.intercept_
# dual_coef_ = a_i * y_i
model.dual_coef_
model.dual_coef_[0][0] * model.support_vectors_[0].dot(x_new) + \
model.dual_coef_[0][1] * model.support_vectors_[1].dot(x_new) + \
model.intercept_
# iris example
from sklearn.datasets import load_iris
iris = load_iris()
idx = np.in1d(iris.target, [0, 1])
X = iris.data[idx, :2]
y = (2 * iris.target[idx] - 1).astype(int)
model = SVC(kernel='linear', C=1e10).fit(X, y)
xmin = X[:, 0].min()
xmax = X[:, 0].max()
ymin = X[:, 1].min()
ymax = X[:, 1].max()
xx = np.linspace(xmin, xmax, 10)
yy = np.linspace(ymin, ymax, 10)
X1, X2 = np.meshgrid(xx, yy)
Z = np.empty(X1.shape)
for (i, j), val in np.ndenumerate(X1):
x1 = val
x2 = X2[i, j]
p = model.decision_function([[x1, x2]])
Z[i, j] = p[0]
levels = [-1, 0, 1]
linestyles = ['dashed', 'solid', 'dashed']
plt.scatter(X[y == -1, 0], X[y == -1, 1], marker='o', label="-1 class")
plt.scatter(X[y == +1, 0], X[y == +1, 1], marker='x', label="+1 class")
plt.contour(X1, X2, Z, levels, colors='k', linestyles=linestyles)
plt.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=300, alpha=0.3)
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.title("SVM")
plt.show()
from sklearn.datasets import load_iris
iris = load_iris()
idx = np.in1d(iris.target, [1, 2])
X = iris.data[idx, 2:]
y = (2 * iris.target[idx] - 3).astype(int)
model = SVC(kernel='linear', C=10).fit(X, y)
xmin = X[:, 0].min()
xmax = X[:, 0].max()
ymin = X[:, 1].min()
ymax = X[:, 1].max()
xx = np.linspace(xmin, xmax, 10)
yy = np.linspace(ymin, ymax, 10)
X1, X2 = np.meshgrid(xx, yy)
Z = np.empty(X1.shape)
for (i, j), val in np.ndenumerate(X1):
x1 = val
x2 = X2[i, j]
p = model.decision_function([[x1, x2]])
Z[i, j] = p[0]
levels = [-1, 0, 1]
linestyles = ['dashed', 'solid', 'dashed']
plt.scatter(X[y == -1, 0], X[y == -1, 1], marker='o', label="-1 class")
plt.scatter(X[y == +1, 0], X[y == +1, 1], marker='x', label="+1 class")
plt.contour(X1, X2, Z, levels, colors='k', linestyles=linestyles)
plt.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=200, alpha=0.3)
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.title("C=10")
plt.show()
from sklearn.datasets import load_digits
digits = load_digits()
N = 2
M = 5
np.random.seed(0)
fig = plt.figure(figsize=(9, 5))
plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05)
klist = np.random.choice(range(len(digits.data)), N * M)
for i in range(N):
for j in range(M):
k = klist[i * M + j]
ax = fig.add_subplot(N, M, i * M + j + 1)
ax.imshow(digits.images[k], cmap=plt.cm.bone)
ax.grid(False)
ax.xaxis.set_ticks([])
ax.yaxis.set_ticks([])
plt.title(digits.target[k])
plt.tight_layout()
plt.show()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target, test_size=0.4, random_state=0)
from sklearn.svm import SVC
svc = SVC(kernel='linear').fit(X_train, y_train)
N = 2
M = 5
np.random.seed(4)
fig = plt.figure(figsize=(9, 5))
plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05)
klist = np.random.choice(range(len(y_test)), N * M)
for i in range(N):
for j in range(M):
k = klist[i * M + j]
ax = fig.add_subplot(N, M, i * M + j + 1)
ax.imshow(X_test[k:(k + 1), :].reshape(8,8), cmap=plt.cm.bone)
ax.grid(False)
ax.xaxis.set_ticks([])
ax.yaxis.set_ticks([])
plt.title("%d => %d" %
(y_test[k], svc.predict(X_test[k:(k + 1), :])[0]))
plt.tight_layout()
plt.show()
from sklearn.metrics import classification_report, accuracy_score
y_pred_train = svc.predict(X_train)
y_pred_test = svc.predict(X_test)
print(classification_report(y_train, y_pred_train))
print(classification_report(y_test, y_pred_test))
```
|
github_jupyter
|
# LeetCode #804. Unique Morse Code Words
## Question
https://leetcode.com/problems/unique-morse-code-words/
International Morse Code defines a standard encoding where each letter is mapped to a series of dots and dashes, as follows: "a" maps to ".-", "b" maps to "-...", "c" maps to "-.-.", and so on.
For convenience, the full table for the 26 letters of the English alphabet is given below:
[".-","-...","-.-.","-..",".","..-.","--.","....","..",".---","-.-",".-..","--","-.","---",".--.","--.-",".-.","...","-","..-","...-",".--","-..-","-.--","--.."]
Now, given a list of words, each word can be written as a concatenation of the Morse code of each letter. For example, "cab" can be written as "-.-..--..." (the concatenation "-.-." + ".-" + "-..."). We'll call such a concatenation the transformation of a word.
Return the number of different transformations among all words we have.
Example:
Input: words = ["gin", "zen", "gig", "msg"]
Output: 2
Explanation:
The transformation of each word is:
"gin" -> "--...-."
"zen" -> "--...-."
"gig" -> "--...--."
"msg" -> "--...--."
There are 2 different transformations, "--...-." and "--...--.".
Note:
The length of words will be at most 100.
Each words[i] will have length in range [1, 12].
words[i] will only consist of lowercase letters.
## My Solution
```
def uniqueMorseRepresentations(words):
morse = [".-","-...","-.-.","-..",".","..-.","--.","....","..",".---","-.-",".-..","--","-.","---",".--.","--.-",".-.","...","-","..-","...-",".--","-..-","-.--","--.."]
from string import ascii_lowercase
alphabet = list(ascii_lowercase)
morse_dic = {}
for i, a in enumerate(alphabet):
morse_dic[a] = morse[i]
res = list()
for word in words:
temp = ""
for w in word:
temp += morse_dic[w]
res.append(temp)
return len(set(res))
# test code
words = ["gin", "zen", "gig", "msg"]
uniqueMorseRepresentations(words)
```
## My Result
__Runtime__ : 16 ms, faster than 94.77% of Python online submissions for Unique Morse Code Words.
__Memory Usage__ : 11.8 MB, less than 35.71% of Python online submissions for Unique Morse Code Words.
## @lee215's Solution
```
def uniqueMorseRepresentations(words):
d = [".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..", ".---", "-.-", ".-..", "--",
"-.", "---", ".--.", "--.-", ".-.", "...", "-", "..-", "...-", ".--", "-..-", "-.--", "--.."]
return len({''.join(d[ord(i) - ord('a')] for i in w) for w in words})
# test code
words = ["gin", "zen", "gig", "msg"]
uniqueMorseRepresentations(words)
```
## @lee215's Result
__Runtime__: 24 ms, faster than 51.63% of Python online submissions for Unique Morse Code Words.
__Memory Usage__ : 11.9 MB, less than 28.57% of Python online submissions for Unique Morse Code Words.
|
github_jupyter
|
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# 06. Distributed CNTK using custom docker images
In this tutorial, you will train a CNTK model on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using a custom docker image and distributed training.
## Prerequisites
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning
* Go through the [00.configuration.ipynb]() notebook to:
* install the AML SDK
* create a workspace and its configuration file (`config.json`)
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Diagnostics
Opt-in diagnostics for better experience, quality, and security of future releases.
```
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
```
## Initialize workspace
Initialize a [Workspace](https://review.docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture?branch=release-ignite-aml#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
```
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
```
## Create a remote compute target
You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an [Azure Batch AI](https://docs.microsoft.com/azure/batch-ai/overview) cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.
**Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpucluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
max_nodes=6)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# Use the 'status' property to get a detailed status for the current cluster.
print(compute_target.status.serialize())
```
## Upload training data
For this tutorial, we will be using the MNIST dataset.
First, let's download the dataset. We've included the `install_mnist.py` script to download the data and convert it to a CNTK-supported format. Our data files will get written to a directory named `'mnist'`.
```
import install_mnist
install_mnist.main('mnist')
```
To make the data accessible for remote training, you will need to upload the data from your local machine to the cloud. AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data, and interact with it from your remote compute targets.
Each workspace is associated with a default datastore. In this tutorial, we will upload the training data to this default datastore, which we will then mount on the remote compute for training in the next section.
```
ds = ws.get_default_datastore()
print(ds.datastore_type, ds.account_name, ds.container_name)
```
The following code will upload the training data to the path `./mnist` on the default datastore.
```
ds.upload(src_dir='./mnist', target_path='./mnist')
```
Now let's get a reference to the path on the datastore with the training data. We can do so using the `path` method. In the next section, we can then pass this reference to our training script's `--data_dir` argument.
```
path_on_datastore = 'mnist'
ds_data = ds.path(path_on_datastore)
print(ds_data)
```
## Train model on the remote compute
Now that we have the cluster ready to go, let's run our distributed training job.
### Create a project directory
Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on.
```
import os
project_folder = './cntk-distr'
os.makedirs(project_folder, exist_ok=True)
```
Copy the training script `cntk_distr_mnist.py` into this project directory.
```
import shutil
shutil.copy('cntk_distr_mnist.py', project_folder)
```
### Create an experiment
Create an [experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed CNTK tutorial.
```
from azureml.core import Experiment
experiment_name = 'cntk-distr'
experiment = Experiment(ws, name=experiment_name)
```
### Create an Estimator
The AML SDK's base Estimator enables you to easily submit custom scripts for both single-node and distributed runs. You should use this generic estimator for training code that uses frameworks such as scikit-learn or CNTK that don't have corresponding custom estimators. For more information on using the generic estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-ml-models).
```
from azureml.train.estimator import *
script_params = {
'--num_epochs': 20,
'--data_dir': ds_data.as_mount(),
'--output_dir': './outputs'
}
estimator = Estimator(source_directory=project_folder,
compute_target=compute_target,
entry_script='cntk_distr_mnist.py',
script_params=script_params,
node_count=2,
process_count_per_node=1,
distributed_backend='mpi',
pip_packages=['cntk-gpu==2.6'],
custom_docker_base_image='microsoft/mmlspark:gpu-0.12',
use_gpu=True)
```
We would like to train our model using a [pre-built Docker container](https://hub.docker.com/r/microsoft/mmlspark/). To do so, specify the name of the Docker image via the `custom_docker_base_image` argument. You can only provide images available in public Docker repositories such as Docker Hub using this argument. To use an image from a private Docker repository, use the constructor's `environment_definition` parameter instead. Finally, we provide the `cntk-gpu` package to `pip_packages` to install CNTK 2.6 on our custom image.
The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to run distributed CNTK, which uses MPI, you must provide the argument `distributed_backend='mpi'`.
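For reference, here is a rough sketch of the private-registry case mentioned above (not used in this tutorial): you would build an `EnvironmentDefinition` and pass it instead of `custom_docker_base_image` and `pip_packages`. The image name, registry address, and credentials below are placeholders, and the attribute names follow the SDK version used at the time of writing, so they may differ in later releases.
```
from azureml.core.runconfig import EnvironmentDefinition

# Hypothetical private-registry configuration; all values below are placeholders.
env = EnvironmentDefinition()
env.docker.enabled = True
env.docker.base_image = "my-org/my-cntk-image:latest"
env.docker.base_image_registry.address = "myregistry.azurecr.io"
env.docker.base_image_registry.username = "<username>"
env.docker.base_image_registry.password = "<password>"
env.python.user_managed_dependencies = True  # assume CNTK and all deps are baked into the image

private_estimator = Estimator(source_directory=project_folder,
                              compute_target=compute_target,
                              entry_script='cntk_distr_mnist.py',
                              script_params=script_params,
                              node_count=2,
                              process_count_per_node=1,
                              distributed_backend='mpi',
                              environment_definition=env)
```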
### Submit job
Run your experiment by submitting your estimator object. Note that this call is asynchronous.
```
run = experiment.submit(estimator)
print(run.get_details())
```
### Monitor your run
You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.
```
from azureml.widgets import RunDetails
RunDetails(run).show()
```
Alternatively, you can block until the script has completed training before running more code.
```
run.wait_for_completion(show_output=True)
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/skojaku/cidre/blob/second-edit/examples/example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# About this notebook
In this notebook, we apply CIDRE to a network with communities and demonstrate how to use CIDRE and visualize the detected groups.
## Preparation
### Install CIDRE package
First, we install `cidre` package with `pip`:
```
!pip install cidre
```
### Loading libraries
Next, we load some libraries
```
import sys
import numpy as np
from scipy import sparse
import pandas as pd
import cidre
import networkx as nx
```
# Example 1
We first present an example of a small artificial network, which can be loaded by
```
# Data path
edge_file = "https://raw.githubusercontent.com/skojaku/cidre/main/data/synthe/edge-table.csv"
node_file = "https://raw.githubusercontent.com/skojaku/cidre/main/data/synthe/node-table.csv"
# Load
node_table = pd.read_csv(node_file)
A, node_labels = cidre.utils.read_edge_list(edge_file)
# Visualization
nx.draw(nx.from_scipy_sparse_matrix(A), linewidths = 1, edge_color="#8d8d8d", edgecolors="b")
```
## About this network
We constructed this synthetic network by generating a network from a stochastic block model (SBM) with two blocks and then adding excessive citation edges between uniformly randomly selected pairs of nodes. Each block corresponds to a community, i.e., a group of nodes that are densely connected with each other but only sparsely connected with the nodes in the other group. Such communities can overshadow anomalous groups in networks.
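For intuition, here is a small sketch of how such a toy network could be generated with networkx's stochastic block model plus randomly added edges. This illustrates the construction only; it is not the exact generator or the parameters used to produce the dataset loaded above.
```
# Sketch: SBM with two dense blocks plus "excessive" random edges (illustrative parameters)
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

sizes = [15, 15]                 # two communities of 15 nodes each
probs = [[0.4, 0.02],
         [0.02, 0.4]]            # dense within blocks, sparse between them
G_toy = nx.stochastic_block_model(sizes, probs, seed=0, directed=True)

# Add extra edges between uniformly randomly selected node pairs
n = G_toy.number_of_nodes()
for _ in range(20):
    u, v = rng.integers(0, n, size=2)
    if u != v:
        G_toy.add_edge(u, v)

# Adjacency matrix in the same sparse format as A
# (in networkx >= 3.0 this call is nx.to_scipy_sparse_array)
A_toy = nx.to_scipy_sparse_matrix(G_toy)
```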
## Community detection with graph-tool
Let's pretend that we do not know that the network is composed of two communities plus additional edges. To run CIDRE, we first need to find the communities. We use the [graph-tool package](https://graph-tool.skewed.de/) to do this, which can be installed with
```bash
conda install -c conda-forge graph-tool
```
or, on the `Colaboratory` platform:
```
%%capture
!echo "deb http://downloads.skewed.de/apt bionic main" >> /etc/apt/sources.list
!apt-key adv --keyserver keys.openpgp.org --recv-key 612DEFB798507F25
!apt-get update
!apt-get install python3-graph-tool python3-cairo python3-matplotlib
```
Now, let's detect communities by fitting the degree-corrected stochastic block model (dcSBM) to the network and consider each detected block as a community.
```
import graph_tool.all as gt
def detect_community(A, K = None, **params):
"""Detect communities using the graph-tool package
:param A: adjacency matrix
:type A: scipy.csr_sparse_matrix
:param K: Maximum number of communities. If K = None, the number of communities is automatically determined by graph-tool.
:type K: int or None
:param params: parameters passed to graph_tool.gt.minimize_blockmodel_dl
"""
def to_graph_tool_format(adj, membership=None):
g = gt.Graph(directed=True)
r, c, v = sparse.find(adj)
nedges = v.size
edge_weights = g.new_edge_property("double")
g.edge_properties["weight"] = edge_weights
g.add_edge_list(
np.hstack([np.transpose((r, c)), np.reshape(v, (nedges, 1))]),
eprops=[edge_weights],
)
return g
G = to_graph_tool_format(A)
states = gt.minimize_blockmodel_dl(
G,
state_args=dict(eweight=G.ep.weight),
multilevel_mcmc_args = {"B_max": A.shape[0] if K is None else K },
**params
)
b = states.get_blocks()
return np.unique(np.array(b.a), return_inverse = True)[1]
group_membership = detect_community(A)
```
## Detecting anomalous groups in the network
Now, we feed the network and its community structure to CIDRE. To do this, we create a `cidre.Cidre` object and pass `group_membership` along with some key parameters to `cidre.Cidre`.
```
alg = cidre.Cidre(group_membership = group_membership, alpha = 0.05, min_edge_weight = 1)
```
- `alpha` (default 0.01) is the statistical significance level.
- `min_edge_weight` is the threshold of the edge weight, i.e., the edges with weight less than this value will be removed.
Then, we input the network to `cidre.Cidre.detect`.
```
groups = alg.detect(A, threshold=0.15)
```
`groups` is a list of `Group` instances. A `Group` instance represents a group of nodes detected by CIDRE and contains information about the type of each member node (i.e., donor and recipient). We can get the donor nodes of a group, for example `groups[0]`, by
```
groups[0].donors
```
The keys and values of this dict object are the IDs of the nodes and their donor scores, respectively. The recipients and their recipient scores can be obtained by
```
groups[0].recipients
```
## Visualization
The `cidre` package provides an API to visualize small groups. To use this API, we first need to import some additional libraries.
```
import seaborn as sns
import matplotlib.pyplot as plt
```
Then, plot the group by
```
# The following three lines are purely for visual enhancement, i.e., changing the saturation of the colors and font size.
sns.set_style("white")
sns.set(font_scale=1.2)
sns.set_style("ticks")
# Set the figure size
width, height = 5,5
fig, ax = plt.subplots(figsize=(width, height))
# Plot a citation group
cidre.DrawGroup().draw(groups[0], ax = ax)
```
# Example 2
Let's apply CIDRE to a much larger empirical citation network, i.e., the citation network of journals in 2013.
```
# Data path
edge_file = "https://raw.githubusercontent.com/skojaku/cidre/main/data/journal-citation/edge-table-2013.csv"
node_file = "https://raw.githubusercontent.com/skojaku/cidre/main/data/journal-citation/community-label.csv"
# Load
node_table = pd.read_csv(node_file)
A, node_labels = cidre.utils.read_edge_list(edge_file)
```
## About this network
This network is a citation network of journals in 2013 constructed from Microsoft Academic Graph.
Each edge is weighted by the number of citations made to the papers in the prior two years.
The following are basic statistics of this network.
```
print("Number of nodes: %d" % A.shape[0])
print("Number of edges: %d" % A.sum())
print("Average degree: %.2f" % (A.sum()/A.shape[0]))
print("Max in-degree: %d" % np.max(A.sum(axis = 0)))
print("Max out-degree: %d" % np.max(A.sum(axis = 1)))
print("Maximum edge weight: %d" % A.max())
print("Minimum edge weight: %d" % np.min(A.data))
```
## Communities
[In our paper](https://www.nature.com/articles/s41598-021-93572-3), we identified the communities of journals using [graph-tool](https://graph-tool.skewed.de/). `node_table` contains the community membership of each journal, from which we prepare `group_membership` array as follows.
```
# Get the group membership
node2com = dict(zip(node_table["journal_id"], node_table["community_id"]))
group_membership = [node2com[node_labels[i]] for i in range(A.shape[0])]
```
## Detecting anomalous groups in the network
As is demonstrated in the first example, we detect the anomalous groups in the network by
```
alg = cidre.Cidre(group_membership = group_membership, alpha = 0.01, min_edge_weight = 10)
groups = alg.detect(A, threshold=0.15)
print("The number of journals in the largest group: %d" % np.max([group.size() for group in groups]))
print("Number of groups detected: %d" % len(groups))
```
[In our paper](https://www.nature.com/articles/s41598-021-93572-3), we omitted the groups with fewer than 50 within-group citations, because we expect anomalous citation groups to contain sufficiently many within-group citations, i.e.,
```
groups = [group for group in groups if group.get_num_edges()>=50]
```
where `group.get_num_edges()` gives the sum of the weights of the non-self-loop edges within the group.
## Visualization
Let us visualize the groups detected by CIDRE. For expository purposes, we sample three groups uniformly at random to visualize.
```
groups_sampled = [groups[i] for i in np.random.choice(len(groups), 3, replace = False)]
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style("white")
sns.set(font_scale=1.2)
sns.set_style("ticks")
fig, axes = plt.subplots(ncols = 3, figsize=(6 * 3, 5))
for i in range(3):
cidre.DrawGroup().draw(groups_sampled[i], ax = axes.flat[i])
```
The numbers beside the nodes are the IDs of the journals in the network. To show the journals' names, we do the following.
First, we load the node labels and make a dictionary that maps the ID of each node to the label:
```
df = pd.read_csv("https://raw.githubusercontent.com/skojaku/cidre/main/data/journal-citation/journal_names.csv")
journalid2label = dict(zip(df.journal_id.values, df.name.values)) # Dictionary from MAG journal ID to the journal name
id2label = {k:journalid2label[v] for k, v in node_labels.items()} # This is a dictionary from ID to label, i.e., {ID:journal_name}
```
Then, give `id2label` to `cidre.DrawGroup.draw`, i.e.,
```
sns.set_style("white")
sns.set(font_scale=1.2)
sns.set_style("ticks")
fig, axes = plt.subplots(ncols = 3, figsize=(9 * 3, 5))
for i in range(3):
plotter = cidre.DrawGroup()
plotter.font_size = 12 # Font size
plotter.label_node_margin = 0.7 # Margin between labels and node
plotter.draw(groups_sampled[i], node_labels = id2label, ax = axes.flat[i])
```
|
github_jupyter
|
# Fine-Tuning a BERT Model and Create a Text Classifier
In the previous section, we already performed the feature engineering to create BERT embeddings from the `review_body` text using the pre-trained BERT model, and split the dataset into train, validation, and test files. To optimize for TensorFlow training, we saved the files in TFRecord format.
Now, let’s fine-tune the BERT model to our Customer Reviews Dataset and add a new classification layer to predict the `star_rating` for a given `review_body`.

As mentioned earlier, BERT’s attention mechanism is called a Transformer. This is, not coincidentally, the name of the popular BERT Python library, “Transformers,” maintained by a company called HuggingFace.
We will use a variant of BERT called [**DistilBert**](https://arxiv.org/pdf/1910.01108.pdf) which requires less memory and compute, but maintains very good accuracy on our dataset.
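As a quick reminder of what those TFRecord features contain, here is a small sketch of how a single review is tokenized into `input_ids` and an attention mask with the same DistilBERT tokenizer that appears later in this notebook; the sample sentence and the length of 64 are purely illustrative.
```
from transformers import DistilBertTokenizer

sketch_tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
encoded = sketch_tokenizer.encode_plus(
    "I really love this product!",  # made-up sample review_body
    padding="max_length",
    max_length=64,                  # stands in for max_seq_length here
    truncation=True,
    return_tensors="tf",
)
print(encoded["input_ids"].shape)       # (1, 64) token ids, zero-padded
print(encoded["attention_mask"].shape)  # (1, 64): 1 for real tokens, 0 for padding
```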
```
import time
import random
import pandas as pd
from glob import glob
import argparse
import json
import subprocess
import sys
import os
import tensorflow as tf
from transformers import DistilBertTokenizer
from transformers import TFDistilBertForSequenceClassification
from transformers import DistilBertConfig
%store -r max_seq_length
try:
max_seq_length
except NameError:
print("++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print("[ERROR] Please run the notebooks in the PREPARE section before you continue.")
print("++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print(max_seq_length)
def select_data_and_label_from_record(record):
x = {
"input_ids": record["input_ids"],
"input_mask": record["input_mask"],
# 'segment_ids': record['segment_ids']
}
y = record["label_ids"]
return (x, y)
def file_based_input_dataset_builder(channel, input_filenames, pipe_mode, is_training, drop_remainder):
# For training, we want a lot of parallel reading and shuffling.
# For eval, we want no shuffling and parallel reading doesn't matter.
if pipe_mode:
print("***** Using pipe_mode with channel {}".format(channel))
from sagemaker_tensorflow import PipeModeDataset
dataset = PipeModeDataset(channel=channel, record_format="TFRecord")
else:
print("***** Using input_filenames {}".format(input_filenames))
dataset = tf.data.TFRecordDataset(input_filenames)
dataset = dataset.repeat(100)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
name_to_features = {
"input_ids": tf.io.FixedLenFeature([max_seq_length], tf.int64),
"input_mask": tf.io.FixedLenFeature([max_seq_length], tf.int64),
# "segment_ids": tf.io.FixedLenFeature([max_seq_length], tf.int64),
"label_ids": tf.io.FixedLenFeature([], tf.int64),
}
def _decode_record(record, name_to_features):
"""Decodes a record to a TensorFlow example."""
return tf.io.parse_single_example(record, name_to_features)
dataset = dataset.apply(
tf.data.experimental.map_and_batch(
lambda record: _decode_record(record, name_to_features),
batch_size=8,
drop_remainder=drop_remainder,
num_parallel_calls=tf.data.experimental.AUTOTUNE,
)
)
dataset.cache()
if is_training:
dataset = dataset.shuffle(seed=42, buffer_size=10, reshuffle_each_iteration=True)
return dataset
train_data = "./data-tfrecord/bert-train"
train_data_filenames = glob("{}/*.tfrecord".format(train_data))
print("train_data_filenames {}".format(train_data_filenames))
train_dataset = file_based_input_dataset_builder(
channel="train", input_filenames=train_data_filenames, pipe_mode=False, is_training=True, drop_remainder=False
).map(select_data_and_label_from_record)
validation_data = "./data-tfrecord/bert-validation"
validation_data_filenames = glob("{}/*.tfrecord".format(validation_data))
print("validation_data_filenames {}".format(validation_data_filenames))
validation_dataset = file_based_input_dataset_builder(
channel="validation",
input_filenames=validation_data_filenames,
pipe_mode=False,
is_training=False,
drop_remainder=False,
).map(select_data_and_label_from_record)
test_data = "./data-tfrecord/bert-test"
test_data_filenames = glob("{}/*.tfrecord".format(test_data))
print(test_data_filenames)
test_dataset = file_based_input_dataset_builder(
channel="test", input_filenames=test_data_filenames, pipe_mode=False, is_training=False, drop_remainder=False
).map(select_data_and_label_from_record)
```
# Specify Manual Hyper-Parameters
```
epochs = 1
steps_per_epoch = 10
validation_steps = 10
test_steps = 10
freeze_bert_layer = True
learning_rate = 3e-5
epsilon = 1e-08
```
# Load Pretrained BERT Model
https://huggingface.co/transformers/pretrained_models.html
```
CLASSES = [1, 2, 3, 4, 5]
config = DistilBertConfig.from_pretrained(
"distilbert-base-uncased",
num_labels=len(CLASSES),
id2label={0: 1, 1: 2, 2: 3, 3: 4, 4: 5},
label2id={1: 0, 2: 1, 3: 2, 4: 3, 5: 4},
)
print(config)
transformer_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", config=config)
input_ids = tf.keras.layers.Input(shape=(max_seq_length,), name="input_ids", dtype="int32")
input_mask = tf.keras.layers.Input(shape=(max_seq_length,), name="input_mask", dtype="int32")
embedding_layer = transformer_model.distilbert(input_ids, attention_mask=input_mask)[0]
X = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(
embedding_layer
)
X = tf.keras.layers.GlobalMaxPool1D()(X)
X = tf.keras.layers.Dense(50, activation="relu")(X)
X = tf.keras.layers.Dropout(0.2)(X)
X = tf.keras.layers.Dense(len(CLASSES), activation="softmax")(X)
model = tf.keras.Model(inputs=[input_ids, input_mask], outputs=X)
for layer in model.layers[:3]:
layer.trainable = not freeze_bert_layer
```
# Setup the Custom Classifier Model Here
```
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)  # the final Dense layer already applies softmax
metric = tf.keras.metrics.SparseCategoricalAccuracy("accuracy")
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, epsilon=epsilon)
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model.summary()
callbacks = []
log_dir = "./tmp/tensorboard/"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
callbacks.append(tensorboard_callback)
history = model.fit(
train_dataset,
shuffle=True,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
validation_data=validation_dataset,
validation_steps=validation_steps,
callbacks=callbacks,
)
print("Trained model {}".format(model))
```
# Evaluate on Holdout Test Dataset
```
test_history = model.evaluate(test_dataset, steps=test_steps, callbacks=callbacks)
print(test_history)
```
# Save the Model
```
tensorflow_model_dir = "./tmp/tensorflow/"
!mkdir -p $tensorflow_model_dir
model.save(tensorflow_model_dir, include_optimizer=False, overwrite=True)
!ls -al $tensorflow_model_dir
!saved_model_cli show --all --dir $tensorflow_model_dir
# !saved_model_cli run --dir $tensorflow_model_dir --tag_set serve --signature_def serving_default \
# --input_exprs 'input_ids=np.zeros((1,64));input_mask=np.zeros((1,64))'
```
# Predict with Model
```
import pandas as pd
import numpy as np
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
sample_review_body = "This product is terrible."
encode_plus_tokens = tokenizer.encode_plus(
sample_review_body, padding='max_length', max_length=max_seq_length, truncation=True, return_tensors="tf"
)
# The id from the pre-trained BERT vocabulary that represents the token. (Padding of 0 will be used if the # of tokens is less than `max_seq_length`)
input_ids = encode_plus_tokens["input_ids"]
# Specifies which tokens BERT should pay attention to (0 or 1). Padded `input_ids` will have 0 in each of these vector elements.
input_mask = encode_plus_tokens["attention_mask"]
outputs = model.predict(x=(input_ids, input_mask))
prediction = [{"label": config.id2label[item.argmax()], "score": item.max().item()} for item in outputs]
print("")
print('Predicted star_rating "{}" for review_body "{}"'.format(prediction[0]["label"], sample_review_body))
```
# Release Resources
```
%%html
<p><b>Shutting down your kernel for this notebook to release resources.</b></p>
<button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button>
<script>
try {
els = document.getElementsByClassName("sm-command-button");
els[0].click();
}
catch(err) {
// NoOp
}
</script>
%%javascript
try {
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
}
catch(err) {
// NoOp
}
```
|
github_jupyter
|
```
import pandas as pd
import numpy as np
from tqdm import tqdm
import seaborn as sns
import matplotlib.pyplot as plt
import re
from nltk.corpus import stopwords
stop = list(set(stopwords.words('english')))
f_train = open('../data/train_14k_split_conll.txt','r',encoding='utf8')
line_train = f_train.readlines()
f_val = open('../data/dev_3k_split_conll.txt','r',encoding='utf8')
line_val = f_val.readlines()
f_test = open('../data/Hindi_test_unalbelled_conll_updated.txt','r',encoding='utf8')
line_test = f_test.readlines()
def get_data(lines):
doc_count = 0
i = 0
#global train_word, train_word_type, train_doc_sentiment, train_uid, train_doc_id
train_word = []
train_word_type = []
train_doc_sentiment = []
train_uid = []
train_doc_id = []
while i < len(lines):
'''
if i < len(lines) and lines[i].split('\t')[0] in ['http','https']:
i += 1
while i < len(lines) and lines[i].split('\t')[0] != '/':
i += 1
i += 1
'''
if i < len(lines) and lines[i].split('\t')[0] in ['@']:
i += 2
if i < len(lines):
line = lines[i]
line = line.replace('\n','').strip().lower()
if line.split('\t')[0] == 'meta':
doc_count += 1
uid = int(line.split('\t')[1])
sentiment = line.split('\t')[2]
#if doc_count == 1575:
# print (uid)
i += 1
elif len(line.split('\t')) >= 2:
if line.split('\t')[0] not in stop:
train_uid.append(uid)
train_doc_sentiment.append(sentiment)
train_word.append(line.split('\t')[0])
train_word_type.append(line.split('\t')[1])
train_doc_id.append(doc_count)
i += 1
else:
i += 1
train_df = pd.DataFrame()
train_df['doc_id'] = train_doc_id
train_df['word'] = train_word
train_df['word_type'] = train_word_type
train_df['uid'] = train_uid
train_df['sentiment'] = train_doc_sentiment
return train_df
def get_data_test(lines):
doc_count = 0
i = 0
#global train_word, train_word_type, train_doc_sentiment, train_uid, train_doc_id
train_word = []
train_word_type = []
train_uid = []
train_doc_id = []
while i < len(lines):
'''
if i < len(lines) and lines[i].split('\t')[0] in ['http','https']:
i += 1
while i < len(lines) and lines[i].split('\t')[0] != '/':
i += 1
i += 1
'''
if i < len(lines) and lines[i].split('\t')[0] in ['@']:
i += 2
if i < len(lines):
line = lines[i]
line = line.replace('\n','').strip().lower()
if line.split('\t')[0] == 'meta':
doc_count += 1
uid = int(line.split('\t')[1])
#if doc_count == 1575:
# print (uid)
i += 1
elif len(line.split('\t')) >= 2:
if line.split('\t')[0] not in stop:
train_uid.append(uid)
train_word.append(line.split('\t')[0])
train_word_type.append(line.split('\t')[1])
train_doc_id.append(doc_count)
i += 1
else:
i += 1
train_df = pd.DataFrame()
train_df['doc_id'] = train_doc_id
train_df['word'] = train_word
train_df['word_type'] = train_word_type
train_df['uid'] = train_uid
return train_df
train_df = get_data(line_train)
val_df = get_data(line_val)
test_df = get_data_test(line_test)
train_df.shape, val_df.shape, test_df.shape
train_df.uid.nunique(), val_df.uid.nunique(), test_df.uid.nunique()
train_df = train_df[train_df.word != 'http']
train_df = train_df[train_df.word != 'https']
train_df = train_df[train_df.word_type != 'o']
print (train_df.shape)
val_df = val_df[val_df.word != 'http']
val_df = val_df[val_df.word != 'https']
val_df = val_df[val_df.word_type != 'o']
print (val_df.shape)
test_df = test_df[test_df.word != 'http']
test_df = test_df[test_df.word != 'https']
test_df = test_df[test_df.word_type != 'o']
print (test_df.shape)
train_df.word = train_df.word.apply(lambda x: re.sub("[^a-zA-Z0-9]", "",x))
train_df = train_df[train_df.word.str.len() >= 3]
print (train_df.shape)
val_df.word = val_df.word.apply(lambda x: re.sub("[^a-zA-Z0-9]", "",x))
val_df = val_df[val_df.word.str.len() >= 3]
print (val_df.shape)
test_df.word = test_df.word.apply(lambda x: re.sub("[^a-zA-Z0-9]", "",x))
test_df = test_df[test_df.word.str.len() >= 3]
print (test_df.shape)
all_words = set(pd.concat([train_df[['word']],val_df[['word']],test_df[['word']]], axis=0).word)
print ("Total number of words {}".format(len(all_words)))
all_words = pd.concat([train_df[['word']],val_df[['word']],test_df[['word']]], axis=0)
all_words = all_words.word.value_counts().reset_index()
all_words.columns = ['word','tot_count']
top_words = all_words[all_words.tot_count >= 2]
print (top_words.shape)
print (top_words.head(5))
all_bert_words = pd.DataFrame(['[PAD]','[UNK]','[CLS]','[SEP]'],columns=['word'])
all_bert_words = pd.concat([all_bert_words,top_words[['word']].drop_duplicates().reset_index(drop=True)],axis=0)
all_bert_words.to_csv("../bert_vocab.txt",index=False,header=False)
train_texts = train_df.groupby(['uid'],sort=True)['word'].apply(lambda x: " ".join(x)).reset_index()
train_texts = pd.merge(train_texts,train_df[['uid','sentiment']],how='left').drop_duplicates().reset_index(drop=True)
train_texts.columns = ['uid','text','sentiment']
#train_texts.text = train_texts.text.apply(lambda x: '[CLS] ' + x + ' [SEP]')
val_texts = val_df.groupby(['uid'],sort=True)['word'].apply(lambda x: " ".join(x)).reset_index().reset_index(drop=True)
val_texts = pd.merge(val_texts,val_df[['uid','sentiment']],how='left').drop_duplicates()
val_texts.columns = ['uid','text','sentiment']
#val_texts.text = val_texts.text.apply(lambda x: '[CLS] ' + x + ' [SEP]')
test_texts = test_df.groupby(['uid'],sort=True)['word'].apply(lambda x: " ".join(x)).reset_index().reset_index(drop=True)
test_texts.columns = ['uid','text']
#test_texts.text = test_texts.text.apply(lambda x: '[CLS] ' + x + ' [SEP]')
train_texts.head(5)
test_texts.head(5)
all_texts = pd.concat([train_texts[['uid','text','sentiment']],val_texts[['uid','text','sentiment']]],axis=0)
all_texts.sentiment.value_counts()
from transformers import BertTokenizer, BertConfig, BertModel
import torch
import math
import torch.nn as nn
import torch.nn.functional as F
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
tokenizer = BertTokenizer.from_pretrained('../bert_vocab.txt')
max_len = 20
n_output = all_texts.sentiment.nunique() #number of possible outputs
senti_dict = {'negative':0,'neutral':1,'positive':2}
in_senti_dict = {0:'negative',1:'neutral',2:'positive'}
all_texts.sentiment = all_texts.sentiment.apply(lambda x: senti_dict[x])
train_tokens = []
for text in tqdm(all_texts.text.values.tolist()):
train_tokens += [tokenizer.encode(text,add_special_tokens=False)]
test_tokens = []
for text in tqdm(test_texts.text.values.tolist()):
test_tokens += [tokenizer.encode(text,add_special_tokens=False)]
train_tokens_padded = []
train_attention_mask = []
train_seg_ids = []
for tokens in tqdm(train_tokens):
tokens = tokens[:max_len]
token_len = len(tokens)
one_mask = [1]*token_len
zero_mask = [0]*(max_len-token_len)
padded_input = tokens + zero_mask
attention_mask = one_mask + zero_mask
segments = []
first_sep = True
current_segment_id = 0
for token in tokens:
segments.append(current_segment_id)
if token == 3:
current_segment_id = 1
segments = segments + [0] * (max_len - len(tokens))
train_tokens_padded += [padded_input]
train_attention_mask += [attention_mask]
train_seg_ids += [segments]
test_tokens_padded = []
test_attention_mask = []
test_seg_ids = []
for tokens in tqdm(test_tokens):
tokens = tokens[:max_len]
token_len = len(tokens)
one_mask = [1]*token_len
zero_mask = [0]*(max_len-token_len)
padded_input = tokens + zero_mask
attention_mask = one_mask + zero_mask
segments = []
first_sep = True
current_segment_id = 0
for token in tokens:
segments.append(current_segment_id)
if token == 102:
current_segment_id = 1
segments = segments + [0] * (max_len - len(tokens))
test_tokens_padded += [padded_input]
test_attention_mask += [attention_mask]
test_seg_ids += [segments]
print (train_tokens_padded[0], train_attention_mask[0], train_seg_ids[0])
train_output = torch.LongTensor(to_categorical(all_texts.sentiment))
train_tokens_padded = torch.LongTensor(np.asarray(train_tokens_padded))
train_attention_mask = torch.LongTensor(np.asarray(train_attention_mask))
train_seg_ids = torch.LongTensor(np.asarray(train_seg_ids))
test_tokens_padded = torch.LongTensor(np.asarray(test_tokens_padded))
test_attention_mask = torch.LongTensor(np.asarray(test_attention_mask))
test_seg_ids = torch.LongTensor(np.asarray(test_seg_ids))
print (train_tokens_padded.shape, train_attention_mask.shape, train_seg_ids.shape, test_tokens_padded.shape, test_attention_mask.shape, test_seg_ids.shape)
dev_tokens_padded = train_tokens_padded[train_texts.shape[0]:]
train_tokens_padded = train_tokens_padded[:train_texts.shape[0]]
dev_attention_mask = train_attention_mask[train_texts.shape[0]:]
train_attention_mask = train_attention_mask[:train_texts.shape[0]]
dev_seg_ids = train_seg_ids[train_texts.shape[0]:]
train_seg_ids = train_seg_ids[:train_texts.shape[0]]
dev_output = train_output[train_texts.shape[0]:]
train_output = train_output[:train_texts.shape[0]]
from torch.utils.data import DataLoader, TensorDataset
from sklearn.metrics import f1_score, accuracy_score
batch_size = 128
train_data = TensorDataset(train_tokens_padded, train_output)
val_data = TensorDataset(dev_tokens_padded, dev_output)
#dataloader
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_data, batch_size=batch_size, shuffle=True)
class TransformerModel(nn.Module):
def __init__(self, ntoken, ninp, nhead, nhid, nlayers, nout, dropout=0.5):
super(TransformerModel, self).__init__()
from torch.nn import TransformerEncoder, TransformerEncoderLayer
self.model_type = 'Transformer'
self.src_mask = None
self.pos_encoder = PositionalEncoding(ninp, dropout)
encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
self.encoder = nn.Embedding(ntoken, ninp)
self.ninp = ninp
self.decoder = nn.Linear(ninp, nout)
self.init_weights()
def _generate_square_subsequent_mask(self, sz):
mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def init_weights(self):
initrange = 0.1
self.encoder.weight.data.uniform_(-initrange, initrange)
self.decoder.bias.data.zero_()
self.decoder.weight.data.uniform_(-initrange, initrange)
def forward(self, src):
if self.src_mask is None or self.src_mask.size(0) != len(src):
device = src.device
mask = self._generate_square_subsequent_mask(len(src)).to(device)
self.src_mask = mask
src = self.encoder(src) * math.sqrt(self.ninp)
#print (src.shape)
src = self.pos_encoder(src)
output = self.transformer_encoder(src, self.src_mask)
output = self.decoder(torch.mean(output,1))
return output
class PositionalEncoding(nn.Module):
def __init__(self, d_model, dropout=0.1, max_len=max_len):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0) #.transpose(0, 1)
self.register_buffer('pe', pe)
def forward(self, x):
#print (self.pe.shape)
x = x + self.pe
return self.dropout(x)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
ntokens = tokenizer.vocab_size # the size of vocabulary
emsize = 200 # embedding dimension
nhid = 200 # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 4 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 8 # the number of heads in the multiheadattention models
dropout = 0.2 # the dropout value
nout = all_texts.sentiment.nunique()
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, nout, dropout)
epochs = 20
model
print ("Total number of parameters to learn {}".format(sum(p.numel() for p in model.parameters() if p.requires_grad)))
optimizer = torch.optim.Adam(model.parameters(),lr=.001)
criterion = torch.nn.CrossEntropyLoss()
model = model.to(device)
criterion = criterion.to(device)
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
if device == 'cpu':
rounded_preds = preds.detach().numpy().argmax(1)
rounded_correct = y.detach().numpy().argmax(1)
else:
rounded_preds = preds.detach().cpu().numpy().argmax(1)
rounded_correct = y.detach().cpu().numpy().argmax(1)
return accuracy_score(rounded_correct,rounded_preds)
def f1_torch(preds, y):
"""
Returns the macro-averaged F1 score per batch
"""
#round predictions to the closest integer
if device == 'cpu':
rounded_preds = preds.detach().numpy().argmax(1)
rounded_correct = y.detach().numpy().argmax(1)
else:
rounded_preds = preds.detach().cpu().numpy().argmax(1)
rounded_correct = y.detach().cpu().numpy().argmax(1)
return f1_score(rounded_correct,rounded_preds,average='macro')
def train(model, train_loader, optimizer, criterion):
global predictions, labels, loss
epoch_loss = 0
epoch_acc = 0
f1_scores = 0
model.train()
counter = 0
for tokens, labels in tqdm(train_loader):
counter += 1
optimizer.zero_grad()
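# two side notes: when training on a GPU, tokens and labels would also need to be
# moved with .to(device) before the forward pass below; and nn.CrossEntropyLoss
# applies log-softmax internally, so the explicit softmax below is redundant
# (passing raw logits to the criterion is the usual pattern)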
predictions = model(tokens) #.squeeze(1)
predictions = torch.softmax(predictions,dim=-1)
#loss = criterion(predictions, labels)
loss = criterion(predictions, torch.max(labels, 1)[1])
acc = binary_accuracy(predictions, labels)
f1_score_batch = f1_torch(predictions, labels)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc
f1_scores += f1_score_batch
return epoch_loss / counter, epoch_acc / counter, f1_scores/counter
def evaluate(model, val_loader, criterion):
epoch_loss = 0
epoch_acc = 0
f1_scores = 0
model.eval()
counter = 0
with torch.no_grad():
for tokens, labels in tqdm(val_loader):
counter += 1
predictions = model(tokens) #.squeeze(1)
predictions = torch.softmax(predictions,dim=-1)
#loss = criterion(predictions, labels)
loss = criterion(predictions, torch.max(labels, 1)[1])
acc = binary_accuracy(predictions, labels)
f1_score_batch = f1_torch(predictions, labels)
epoch_loss += loss.item()
epoch_acc += acc
f1_scores += f1_score_batch
return epoch_loss / counter, epoch_acc / counter, f1_scores/counter
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
best_valid_loss = 999
for epoch in range(epochs):
start_time = time.time()
train_loss, train_acc, train_f1 = train(model, train_loader, optimizer, criterion)
valid_loss, valid_acc, valid_f1 = evaluate(model, val_loader, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), '../models/model_transformer.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}% | Train F1: {train_f1*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}% | Val F1: {valid_f1*100:.2f}%')
```
|
github_jupyter
|
# 2-22: Intro to scikit-learn
<img src="https://www.cityofberkeley.info/uploadedImages/Public_Works/Level_3_-_Transportation/DSC_0637.JPG" style="width: 500px; height: 275px;" />
---
**Regression** is useful for predicting a value that varies on a continuous scale from a set of features. This lab will introduce the regression methods available in the scikit-learn extension to scipy, focusing on ordinary least squares linear regression, LASSO, and Ridge regression.
*Estimated Time: 45 minutes*
---
### Table of Contents
1 - [The Test-Train-Validation Split](#section 1)<br>
2 - [Linear Regression](#section 2)<br>
3 - [Ridge Regression](#section 3)<br>
4 - [LASSO Regression](#section 4)<br>
5 - [Choosing a Model](#section 5)<br>
**Dependencies:**
```
import numpy as np
from datascience import *
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge, Lasso, LinearRegression
from sklearn.model_selection import KFold
```
## The Data: Bike Sharing
In your time at Cal, you've probably passed by one of the many bike sharing stations around campus. Bike sharing systems have become more and more popular as traffic and concerns about global warming rise. This lab's data describes one such bike sharing system in Washington D.C., from [UC Irvine's Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset).
```
bike=Table().read_table(('data/Bike-Sharing-Dataset/day.csv'))
# reformat the date column to integers representing the day of the year, 001-366
bike['dteday'] = pd.to_datetime(bike['dteday']).strftime('%j')
# get rid of the index column
bike = bike.drop(0)
bike.show(4)
```
Take a moment to get familiar with the data set. In data science, you'll often hear rows referred to as **records** and columns as **features**. Before you continue, make sure you can answer the following:
- How many records are in this data set?
- What does each record represent?
- What are the different features?
- How is each feature represented? What values does it take, and what are the data types of each value?
Use Table methods and check the UC Irvine link for more information.
```
# explore the data set here
```
---
## 1. The Test-Train-Validation Split <a id='section 1'></a>
When we train a model on a data set, we run the risk of [**over-fitting**](http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html). Over-fitting happens when a model becomes so complex that it makes very accurate predictions for the data it was trained on, but it can't generalize to make good predictions on new data.
We can reduce the risk of overfitting by using a **test-train split**.
1. Randomly divide our data set into two smaller sets: one for training and one for testing
2. Train the data on the training set, changing our model along the way to increase accuracy
3. Test the data's predictions using the test set.
Scikit-learn's [`train_test_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function will help here. First, separate your data into two parts: a Table containing the features used to make our prediction, and an array of the true values. To start, let's predict the *total number of riders* (y) using *every feature that isn't a rider count* (X).
Note: for the function to work, X can't be a Table. Save X as a pandas DataFrame by calling `.to_df()` on the feature Table.
```
# the features used to predict riders
X = bike.drop('casual', 'registered', 'cnt')
X = X.to_df()
# the number of riders
y = bike['cnt']
```
Next, set the random seed using `np.random.seed(...)`. This will affect the way numpy pseudo-randomly generates the numbers it uses to decide how to split the data into training and test sets. Any seed number is fine; the important thing is to document the number you used in case we need to recreate this pseudorandom split in the future.
Then, call `train_test_split` on your X and y. Also set the parameters `train_size=` and `test_size=` to set aside 80% of the data for training and 20% for testing.
```
# set the random seed
np.random.seed(10)
# split the data
# train_test_split returns 4 values: X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_test_split(X, y,
train_size=0.80, test_size=0.20)
```
### The Validation Set
Our test data should only be used once: after our model has been selected, trained, and tweaked. Unfortunately, it's possible that in the process of tweaking our model, we could still overfit it to the training data and only find out when we return a poor test data score. What then?
A **validation set** can help here. By trying your trained models on a validation set, you can (hopefully) weed out models that don't generalize well.
Call `train_test_split` again, this time on your X_train and y_train. We want to set aside 25% of the data to go to our validation set, and keep the remaining 75% for our training set.
Note: This means that out of the original data, 20% is for testing, 20% is for validation, and 60% is for training.
```
# split the data
# Returns 4 values: X_train, X_validate, y_train, y_validate
X_train, X_validate, y_train, y_validate = train_test_split(X_train, y_train,
train_size=0.75, test_size=0.25)
```
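As a quick sanity check (a minimal sketch, assuming the two splits above were run in order), the three subsets should cover roughly 60% / 20% / 20% of the original records:
```
# fraction of the original data in each subset
n_total = len(X)
print(len(X_train) / n_total, len(X_validate) / n_total, len(X_test) / n_total)
```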
## 2. Linear Regression (Ordinary Least Squares) <a id='section 2'></a>
Now, we're ready to start training models and making predictions. We'll start with a **linear regression** model.
[Scikit-learn's linear regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.score) is built around scipy's ordinary least squares, which you used in the last lab. The syntax for each scikit-learn model is very similar:
1. Create a model by calling its constructor function. For example, `LinearRegression()` makes a linear regression model.
2. Train the model on your training data by calling `.fit(train_X, train_y)` on the model
Create a linear regression model in the cell below.
```
# create a model
lin_reg = LinearRegression(normalize=True)
# fit the model
lin_model = lin_reg.fit(X_train, y_train)
```
With the model fit, you can look at the best-fit slope for each feature using `.coef_`, and you can get the intercept of the regression line with `.intercept_`.
```
print(lin_model.coef_)
print(lin_model.intercept_)
```
Now, let's get a sense of how good our model is. We can do this by looking at the difference between the predicted values and the actual values, also called the error.
We can see this graphically using a scatter plot.
- Call `.predict(X)` on your linear regression model, passing in your training X, to return the predicted rider counts. Save them to a variable `lin_pred`.
- Using a scatter plot (`plt.scatter(...)`), plot the predicted values against the actual values (`y_train`)
```
# predict the number of riders
lin_pred = lin_model.predict(X_train)
# plot the residuals on a scatter plot
plt.scatter(y_train, lin_pred)
plt.title('Linear Model (OLS)')
plt.xlabel('actual value')
plt.ylabel('predicted value')
plt.show()
```
Question: what should our scatter plot look like if our model was 100% accurate?
**ANSWER:** All points would fall on a line with a slope of one through the origin: the predicted value would always equal the actual value.
We can also get a sense of how well our model is doing by calculating the **root mean squared error**. The root mean squared error (RMSE) represents the average difference between the predicted and the actual values.
To get the RMSE:
- subtract each predicted value from its corresponding actual value (the errors)
- square each error (this prevents negative errors from cancelling positive errors)
- average the squared errors
- take the square root of the average (this gets the error back in the original units)
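Written as a formula, these steps give $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_{i} - y_{i})^{2}}$, where $\hat{y}_{i}$ are the predictions and $y_{i}$ the actual values.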
Write a function `rmse` that calculates the root mean squared error of a predicted set of values.
```
def rmse(pred, actual):
return np.sqrt(np.mean((pred - actual) ** 2))
```
Now calculate the root mean squared error for your linear model.
```
rmse(lin_pred, y_train)
```
## 3. Ridge Regression <a id='section 3'></a>
Now that you've gone through the process for OLS linear regression, it's easy to do the same for [**Ridge Regression**](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html). In this case, the constructor function that makes the model is `Ridge()`.
```
# make and fit a Ridge regression model
ridge_reg = Ridge()
ridge_model = ridge_reg.fit(X_train, y_train)
# use the model to make predictions
ridge_pred = ridge_model.predict(X_train)
# plot the predictions
plt.scatter(y_train, ridge_pred)
plt.title('Ridge Model')
plt.xlabel('actual values')
plt.ylabel('predicted values')
plt.show()
# calculate the rmse for the Ridge model
rmse(ridge_pred, y_train)
```
Note: the documentation for Ridge regression shows it has lots of **hyperparameters**: values we can choose when the model is made. Now that we've tried it using the defaults, look at the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) and try changing some parameters to see if you can get a lower RMSE (`alpha` might be a good one to try).
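For example, one way to experiment is to loop over a few candidate `alpha` values and compare the training RMSE (a quick sketch; the values below are arbitrary starting points):
```
# try a few regularization strengths and compare training RMSE
for alpha in [0.01, 0.1, 1, 10, 100]:
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    print(alpha, rmse(model.predict(X_train), y_train))
```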
## 4. LASSO Regression <a id='section 4'></a>
Finally, we'll try using [LASSO regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html). The constructor function to make the model is `Lasso()`.
You may get a warning message saying the objective did not converge. The model will still work, but to get convergence try increasing the number of iterations (`max_iter=`) when you construct the model.
```
# create and fit the model
lasso_reg = Lasso(max_iter=10000)
lasso_model = lasso_reg.fit(X_train, y_train)
# use the model to make predictions
lasso_pred = lasso_model.predict(X_train)
# plot the predictions
plt.scatter(y_train, lasso_pred)
plt.title('LASSO Model')
plt.xlabel('actual values')
plt.ylabel('predicted values')
plt.show()
# calculate the rmse for the LASSO model
rmse(lasso_pred, y_train)
```
Note: LASSO regression also has many tweakable hyperparameters. See how changing them affects the accuracy!
Question: How do these three models compare on performance? What sorts of things could we do to improve performance?
**ANSWER:** All three models have very similar accuracy, around 900 RMSE for each.
We could try changing which features we use or adjust the hyperparameters.
---
## 5. Choosing a model <a id='section 5'></a>
### Validation
Once you've tweaked your models' hyperparameters to get the best possible accuracy on your training sets, we can compare your models on your validation set. Make predictions on `X_validate` with each one of your models, then calculate the RMSE for each set of predictions.
```
# make predictions for each model
lin_vpred = lin_model.predict(X_validate)
ridge_vpred = ridge_model.predict(X_validate)
lasso_vpred = lasso_model.predict(X_validate)
# calculate RMSE for each set of validation predictions
print("linear model rmse: ", rmse(lin_vpred, y_validate))
print("Ridge rmse: ", rmse(ridge_vpred, y_validate))
print("LASSO rmse: ", rmse(lasso_vpred, y_validate))
```
How do the RMSEs for the validation data compare to those for the training data? Why?
Did the model that performed best on the training set also do best on the validation set?
**YOUR ANSWER:** The RMSE for the validation set tends to be larger than for the training set, simply because the models were fit to the training data.
### Predicting the Test Set
Finally, select one final model to make predictions for your test set. This is often the model that performed best on the validation data.
```
# make predictions for the test set using one model of your choice
final_pred = lin_model.predict(X_test)
# calculate the rmse for the final predictions
print('Test set rmse: ', rmse(final_pred, y_test))
```
Coming up this semester: how to select your models, model parameters, and features to get the best performance.
---
Notebook developed by: Keeley Takimoto
Data Science Modules: http://data.berkeley.edu/education/modules
|
github_jupyter
|
# Author: Faique Ali
## Task 01 : Prediction Using Supervised ML
<p>
Using Linear Regression, predict the percentage score of a student based on their number of study hours.
</p>
# Imports
```
import pandas as pd
from pandas import DataFrame
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
%matplotlib inline
```
# Constants
```
DATA_SOURCE_LINK = 'http://bit.ly/w-data'
COL_HRS = 'Hours'
COL_SCORES = 'Scores'
SCATTER_PLT_TITLE = 'Scatter Plot showing Students Nr. of Hours Studied VS Scored Marks'
SCATTER_PLT_XLABEL = 'HOURS'
SCATTER_PLT_YLABEL = 'SCORES'
```
## Step1: Gather the Data
```
data = pd.read_csv(DATA_SOURCE_LINK)
# Show first 5 rows
data.head()
# Checking for null values in the data set
data.info()
```
## Step 2: Visualizing the Data
#### Splitting Hours and Scores into two variables
```
X = DataFrame(data, columns=[COL_HRS])
y = DataFrame(data, columns=[COL_SCORES])
```
#### Creating a scatter plot
```
plt.figure(figsize=(9,6))
# Specifying plot type
plt.scatter(X,y, s=120, alpha=0.7)
# Adding labels to the plot
plt.title(SCATTER_PLT_TITLE)
plt.xlabel(SCATTER_PLT_XLABEL)
plt.ylabel(SCATTER_PLT_YLABEL)
#
plt.show()
```
## Step 3: Splitting the Data and Training the Model
```
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=0, test_size=0.2)
# Creating an obj
regression = LinearRegression()
regression.fit(X_train, y_train)
```
#### Slope coefficient (Theta_1)
```
regression.coef_
```
#### Intercept
The intercept here is a positive value. It means that, according to the model, a student with `0hrs` of preparation would still score about `2 marks`.
```
regression.intercept_
```
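Putting the slope and intercept together gives the fitted line (a small sketch; the indexing reflects that `coef_` and `intercept_` come back as arrays because `y` is a DataFrame):
```
# fitted line: predicted score = intercept + slope * hours
print(f'score = {regression.intercept_[0]:.2f} + {regression.coef_[0][0]:.2f} * hours')
```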
## Step 4: Testing outcomes on Test Data
```
# Making predictions on testing data
predicted_scores = regression.predict(X_test)
predicted_scores
# Convert numpy nd array to 1d array
predicted_scores = predicted_scores.flatten()
# Extracting Hrs cols from X_test
nr_of_hrs = X_test['Hours'].to_numpy()
# Displaying our test results into a Data frame
test_df = pd.DataFrame(data={'Nr.of Hrs': nr_of_hrs, 'Scored Marks': predicted_scores})
test_df
```
## Step 5: Plotting Regression Line
```
plt.figure(figsize=(9,6))
# Specifying plot type
plt.scatter(X,y, s=120, alpha=0.85)
# To plot regression line
plt.plot(X, regression.predict(X), color='red', linewidth=2, alpha= 0.9)
# Adding labels to the plot
plt.title(SCATTER_PLT_TITLE)
plt.xlabel(SCATTER_PLT_XLABEL)
plt.ylabel(SCATTER_PLT_YLABEL)
#
plt.show()
```
## Step 6: Comparing & Evaluating our Model
```
# Extracting Scores cols from y_test
actual_scores = y_test['Scores'].to_numpy()
# Displaying our test results into a Data frame
act_vs_pred_df = pd.DataFrame(data={'Actual Scores': actual_scores, 'Predicted Scores': predicted_scores})
act_vs_pred_df
```
### Estimating training and testing accuracy or $R^{2}$
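Here `score` returns the coefficient of determination, $R^{2} = 1 - \frac{\sum_{i}(y_{i} - \hat{y}_{i})^{2}}{\sum_{i}(y_{i} - \bar{y})^{2}}$.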
```
print(f'Training Accuracy: ', regression.score(X_train,y_train))
print(f'Testing Accuracy: ', regression.score(X_test,y_test))
```
### Evaluating model
```
print(f'Mean Squared Error: ', metrics.mean_squared_error(y_test,predicted_scores))
print(f'Mean Absolute Error: ', metrics.mean_absolute_error(y_test,predicted_scores))
```
# `Making Predictions `
<b>What will be the predicted score if a student studies for 9.25 hrs/day?</b>
```
hrs = 9.25
pred = regression.predict([[hrs]])
print(f'Nr of hrs studied: {hrs}')
print(f'Obtained marks/score: {pred[0][0]}')
```
|
github_jupyter
|
**Chapter 19 – Training and Deploying TensorFlow Models at Scale**
_This notebook contains all the sample code in chapter 19._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/19_training_and_deploying_at_scale.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" > /etc/apt/sources.list.d/tensorflow-serving.list
!curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
!apt update && apt-get install -y tensorflow-model-server
!pip install -q -U tensorflow-serving-api
IS_COLAB = True
except Exception:
IS_COLAB = False
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
if not tf.config.list_physical_devices('GPU'):
print("No GPU was detected. CNNs can be very slow without a GPU.")
if IS_COLAB:
print("Go to Runtime > Change runtime and select a GPU hardware accelerator.")
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
tf.random.set_seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deploy"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
# Deploying TensorFlow models to TensorFlow Serving (TFS)
We will use the REST API or the gRPC API.
## Save/Load a `SavedModel`
```
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data()
X_train_full = X_train_full[..., np.newaxis].astype(np.float32) / 255.
X_test = X_test[..., np.newaxis].astype(np.float32) / 255.
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_new = X_test[:3]
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28, 1]),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-2),
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
np.round(model.predict(X_new), 2)
model_version = "0001"
model_name = "my_mnist_model"
model_path = os.path.join(model_name, model_version)
model_path
!rm -rf {model_name}
tf.saved_model.save(model, model_path)
for root, dirs, files in os.walk(model_name):
indent = ' ' * root.count(os.sep)
print('{}{}/'.format(indent, os.path.basename(root)))
for filename in files:
print('{}{}'.format(indent + ' ', filename))
!saved_model_cli show --dir {model_path}
!saved_model_cli show --dir {model_path} --tag_set serve
!saved_model_cli show --dir {model_path} --tag_set serve \
--signature_def serving_default
!saved_model_cli show --dir {model_path} --all
```
Let's write the new instances to a `npy` file so we can pass them easily to our model:
```
np.save("my_mnist_tests.npy", X_new)
input_name = model.input_names[0]
input_name
```
And now let's use `saved_model_cli` to make predictions for the instances we just saved:
```
!saved_model_cli run --dir {model_path} --tag_set serve \
--signature_def serving_default \
--inputs {input_name}=my_mnist_tests.npy
np.round([[1.1739199e-04, 1.1239604e-07, 6.0210604e-04, 2.0804715e-03, 2.5779348e-06,
6.4079795e-05, 2.7411186e-08, 9.9669880e-01, 3.9654213e-05, 3.9471846e-04],
[1.2294615e-03, 2.9207937e-05, 9.8599273e-01, 9.6755642e-03, 8.8930705e-08,
2.9156188e-04, 1.5831805e-03, 1.1311053e-09, 1.1980456e-03, 1.1113169e-07],
[6.4066830e-05, 9.6359509e-01, 9.0598064e-03, 2.9872139e-03, 5.9552520e-04,
3.7478798e-03, 2.5074568e-03, 1.1462728e-02, 5.5553433e-03, 4.2495009e-04]], 2)
```
## TensorFlow Serving
Install [Docker](https://docs.docker.com/install/) if you don't have it already. Then run:
```bash
docker pull tensorflow/serving
export ML_PATH=$HOME/ml # or wherever this project is
docker run -it --rm -p 8500:8500 -p 8501:8501 \
-v "$ML_PATH/my_mnist_model:/models/my_mnist_model" \
-e MODEL_NAME=my_mnist_model \
tensorflow/serving
```
Once you are finished using it, press Ctrl-C to shut down the server.
Alternatively, if `tensorflow_model_server` is installed (e.g., if you are running this notebook in Colab), then the following 3 cells will start the server:
```
os.environ["MODEL_DIR"] = os.path.split(os.path.abspath(model_path))[0]
%%bash --bg
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=my_mnist_model \
--model_base_path="${MODEL_DIR}" >server.log 2>&1
!tail server.log
import json
input_data_json = json.dumps({
"signature_name": "serving_default",
"instances": X_new.tolist(),
})
repr(input_data_json)[:1500] + "..."
```
Now let's use TensorFlow Serving's REST API to make predictions:
```
import requests
SERVER_URL = 'http://localhost:8501/v1/models/my_mnist_model:predict'
response = requests.post(SERVER_URL, data=input_data_json)
response.raise_for_status() # raise an exception in case of error
response = response.json()
response.keys()
y_proba = np.array(response["predictions"])
y_proba.round(2)
```
### Using the gRPC API
```
from tensorflow_serving.apis.predict_pb2 import PredictRequest
request = PredictRequest()
request.model_spec.name = model_name
request.model_spec.signature_name = "serving_default"
input_name = model.input_names[0]
request.inputs[input_name].CopyFrom(tf.make_tensor_proto(X_new))
import grpc
from tensorflow_serving.apis import prediction_service_pb2_grpc
channel = grpc.insecure_channel('localhost:8500')
predict_service = prediction_service_pb2_grpc.PredictionServiceStub(channel)
response = predict_service.Predict(request, timeout=10.0)
response
```
Convert the response to a tensor:
```
output_name = model.output_names[0]
outputs_proto = response.outputs[output_name]
y_proba = tf.make_ndarray(outputs_proto)
y_proba.round(2)
```
Or to a NumPy array if your client does not include the TensorFlow library:
```
output_name = model.output_names[0]
outputs_proto = response.outputs[output_name]
shape = [dim.size for dim in outputs_proto.tensor_shape.dim]
y_proba = np.array(outputs_proto.float_val).reshape(shape)
y_proba.round(2)
```
## Deploying a new model version
```
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28, 1]),
keras.layers.Dense(50, activation="relu"),
keras.layers.Dense(50, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-2),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
model_version = "0002"
model_name = "my_mnist_model"
model_path = os.path.join(model_name, model_version)
model_path
tf.saved_model.save(model, model_path)
for root, dirs, files in os.walk(model_name):
indent = ' ' * root.count(os.sep)
print('{}{}/'.format(indent, os.path.basename(root)))
for filename in files:
print('{}{}'.format(indent + ' ', filename))
```
**Warning**: You may need to wait a minute before the new model is loaded by TensorFlow Serving.
```
import requests
SERVER_URL = 'http://localhost:8501/v1/models/my_mnist_model:predict'
response = requests.post(SERVER_URL, data=input_data_json)
response.raise_for_status()
response = response.json()
response.keys()
y_proba = np.array(response["predictions"])
y_proba.round(2)
```
# Deploy the model to Google Cloud AI Platform
Follow the instructions in the book to deploy the model to Google Cloud AI Platform, download the service account's private key and save it to the `my_service_account_private_key.json` in the project directory. Also, update the `project_id`:
```
project_id = "onyx-smoke-242003"
import googleapiclient.discovery
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "my_service_account_private_key.json"
model_id = "my_mnist_model"
model_path = "projects/{}/models/{}".format(project_id, model_id)
model_path += "/versions/v0001/" # if you want to run a specific version
ml_resource = googleapiclient.discovery.build("ml", "v1").projects()
def predict(X):
input_data_json = {"signature_name": "serving_default",
"instances": X.tolist()}
request = ml_resource.predict(name=model_path, body=input_data_json)
response = request.execute()
if "error" in response:
raise RuntimeError(response["error"])
return np.array([pred[output_name] for pred in response["predictions"]])
Y_probas = predict(X_new)
np.round(Y_probas, 2)
```
# Using GPUs
```
tf.test.is_gpu_available()
tf.test.gpu_device_name()
tf.test.is_built_with_cuda()
from tensorflow.python.client.device_lib import list_local_devices
devices = list_local_devices()
devices
```
# Distributed Training
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
def create_model():
return keras.models.Sequential([
keras.layers.Conv2D(filters=64, kernel_size=7, activation="relu",
padding="same", input_shape=[28, 28, 1]),
keras.layers.MaxPooling2D(pool_size=2),
keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu",
padding="same"),
keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu",
padding="same"),
keras.layers.MaxPooling2D(pool_size=2),
keras.layers.Flatten(),
keras.layers.Dense(units=64, activation='relu'),
keras.layers.Dropout(0.5),
keras.layers.Dense(units=10, activation='softmax'),
])
batch_size = 100
model = create_model()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-2),
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid), batch_size=batch_size)
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
distribution = tf.distribute.MirroredStrategy()
# Change the default all-reduce algorithm:
#distribution = tf.distribute.MirroredStrategy(
# cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
# Specify the list of GPUs to use:
#distribution = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
# Use the central storage strategy instead:
#distribution = tf.distribute.experimental.CentralStorageStrategy()
#resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
#tf.tpu.experimental.initialize_tpu_system(resolver)
#distribution = tf.distribute.experimental.TPUStrategy(resolver)
with distribution.scope():
model = create_model()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-2),
metrics=["accuracy"])
batch_size = 100 # must be divisible by the number of workers
model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid), batch_size=batch_size)
model.predict(X_new)
```
Custom training loop:
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
K = keras.backend
distribution = tf.distribute.MirroredStrategy()
with distribution.scope():
model = create_model()
optimizer = keras.optimizers.SGD()
with distribution.scope():
dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)).repeat().batch(batch_size)
input_iterator = distribution.make_dataset_iterator(dataset)
@tf.function
def train_step():
def step_fn(inputs):
X, y = inputs
with tf.GradientTape() as tape:
Y_proba = model(X)
loss = K.sum(keras.losses.sparse_categorical_crossentropy(y, Y_proba)) / batch_size
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss
per_replica_losses = distribution.experimental_run(step_fn, input_iterator)
mean_loss = distribution.reduce(tf.distribute.ReduceOp.SUM,
per_replica_losses, axis=None)
return mean_loss
n_epochs = 10
with distribution.scope():
input_iterator.initialize()
for epoch in range(n_epochs):
print("Epoch {}/{}".format(epoch + 1, n_epochs))
for iteration in range(len(X_train) // batch_size):
print("\rLoss: {:.3f}".format(train_step().numpy()), end="")
print()
batch_size = 100 # must be divisible by the number of workers
model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid), batch_size=batch_size)
```
## Training across multiple servers
A TensorFlow cluster is a group of TensorFlow processes running in parallel, usually on different machines, and talking to each other to complete some work, for example training or executing a neural network. Each TF process in the cluster is called a "task" (or a "TF server"). It has an IP address, a port, and a type (also called its role or its job). The type can be `"worker"`, `"chief"`, `"ps"` (parameter server) or `"evaluator"`:
* Each **worker** performs computations, usually on a machine with one or more GPUs.
* The **chief** performs computations as well, but it also handles extra work such as writing TensorBoard logs or saving checkpoints. There is a single chief in a cluster. If no chief is specified, then the first worker is the chief.
* A **parameter server** (ps) only keeps track of variable values, it is usually on a CPU-only machine.
* The **evaluator** obviously takes care of evaluation. There is usually a single evaluator in a cluster.
The set of tasks that share the same type is often called a "job". For example, the "worker" job is the set of all workers.
To start a TensorFlow cluster, you must first specify it. This means defining all the tasks (IP address, TCP port, and type). For example, the following cluster specification defines a cluster with 3 tasks (2 workers and 1 parameter server). It's a dictionary with one key per job, and the values are lists of task addresses:
```
{
"worker": ["my-worker0.example.com:9876", "my-worker1.example.com:9876"],
"ps": ["my-ps0.example.com:9876"]
}
```
Every task in the cluster may communicate with every other task in the cluster, so make sure to configure your firewall to authorize all communications between these machines on these ports (it's usually simpler if you use the same port on every machine).
When a task is started, it needs to be told which one it is: its type and index (the task index is also called the task id). A common way to specify everything at once (both the cluster spec and the current task's type and id) is to set the `TF_CONFIG` environment variable before starting the program. It must be a JSON-encoded dictionary containing a cluster specification (under the `"cluster"` key), and the type and index of the task to start (under the `"task"` key). For example, the following `TF_CONFIG` environment variable defines a simple cluster with 2 workers and 1 parameter server, and specifies that the task to start is the first worker:
```
import os
import json
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
"worker": ["my-work0.example.com:9876", "my-work1.example.com:9876"],
"ps": ["my-ps0.example.com:9876"]
},
"task": {"type": "worker", "index": 0}
})
print("TF_CONFIG='{}'".format(os.environ["TF_CONFIG"]))
```
Some platforms (e.g., Google Cloud ML Engine) automatically set this environment variable for you.
Then you would write a short Python script to start a task. The same script can be used on every machine, since it will load the `TF_CONFIG` variable, which will tell it which task to start:
```
import tensorflow as tf
resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
worker0 = tf.distribute.Server(resolver.cluster_spec(),
job_name=resolver.task_type,
task_index=resolver.task_id)
```
Another way to specify the cluster specification is directly in Python, rather than through an environment variable:
```
cluster_spec = tf.train.ClusterSpec({
"worker": ["127.0.0.1:9901", "127.0.0.1:9902"],
"ps": ["127.0.0.1:9903"]
})
```
You can then start a server simply by passing it the cluster spec and indicating its type and index. Let's start the two remaining tasks (remember that in general you would only start a single task per machine; we are starting 3 tasks on the localhost just for the purpose of this code example):
```
#worker1 = tf.distribute.Server(cluster_spec, job_name="worker", task_index=1)
ps0 = tf.distribute.Server(cluster_spec, job_name="ps", task_index=0)
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
"worker": ["127.0.0.1:9901", "127.0.0.1:9902"],
"ps": ["127.0.0.1:9903"]
},
"task": {"type": "worker", "index": 1}
})
print(repr(os.environ["TF_CONFIG"]))
distribution = tf.distribute.experimental.MultiWorkerMirroredStrategy()
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
"worker": ["127.0.0.1:9901", "127.0.0.1:9902"],
"ps": ["127.0.0.1:9903"]
},
"task": {"type": "worker", "index": 1}
})
#CUDA_VISIBLE_DEVICES=0
with distribution.scope():
model = create_model()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-2),
metrics=["accuracy"])
import tensorflow as tf
from tensorflow import keras
import numpy as np
# At the beginning of the program (restart the kernel before running this cell)
distribution = tf.distribute.experimental.MultiWorkerMirroredStrategy()
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data()
X_train_full = X_train_full[..., np.newaxis] / 255.
X_test = X_test[..., np.newaxis] / 255.
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_new = X_test[:3]
n_workers = 2
batch_size = 32 * n_workers
dataset = tf.data.Dataset.from_tensor_slices((X_train[..., np.newaxis], y_train)).repeat().batch(batch_size)
def create_model():
return keras.models.Sequential([
keras.layers.Conv2D(filters=64, kernel_size=7, activation="relu",
padding="same", input_shape=[28, 28, 1]),
keras.layers.MaxPooling2D(pool_size=2),
keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu",
padding="same"),
keras.layers.Conv2D(filters=128, kernel_size=3, activation="relu",
padding="same"),
keras.layers.MaxPooling2D(pool_size=2),
keras.layers.Flatten(),
keras.layers.Dense(units=64, activation='relu'),
keras.layers.Dropout(0.5),
keras.layers.Dense(units=10, activation='softmax'),
])
with distribution.scope():
model = create_model()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-2),
metrics=["accuracy"])
model.fit(dataset, steps_per_epoch=len(X_train)//batch_size, epochs=10)
# Hyperparameter tuning
# Only talk to ps server
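# (TF1-style snippet: tf.ConfigProto lives under tf.compat.v1 in TF2, and
#  tf_config here is assumed to be json.loads(os.environ["TF_CONFIG"]))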
config_proto = tf.ConfigProto(device_filters=['/job:ps', '/job:worker/task:%d' % tf_config['task']['index']])
config = tf.estimator.RunConfig(session_config=config_proto)
# default since 1.10
strategy.num_replicas_in_sync
```
|
github_jupyter
|
# Longest Palindromic Subsequence
In this notebook, you'll be tasked with finding the length of the *Longest Palindromic Subsequence* (LPS) given a string of characters.
As an example:
* With an input string, `ABBDBCACB`
* The LPS is `BCACB`, which has `length = 5`
In this notebook, we'll focus on finding an optimal solution to the LPS task, using dynamic programming. There will be some similarities to the Longest Common Subsequence (LCS) task, which is outlined in detail in a previous notebook. It is recommended that you start with that notebook before trying out this task.
### Hint
**Storing pre-computed values**
The LPS algorithm depends on looking at one string and comparing letters to one another. Similar to how you compared two strings in the LCS (Longest Common Subsequence) task, you can compare the characters in just *one* string with one another, using a matrix to store the results of matching characters.
For a string of length n characters, you can create an `n x n` matrix to store the solutions to subproblems. In this case, the subproblem is the length of the longest palindromic subsequence, up to a certain point in the string (up to the end of a certain substring).
It may be helpful to try filling up a matrix on paper before you start your code solution. If you get stuck with this task, you may look at some example matrices below (see the section titled **Example matrices**), before consulting the complete solution code.
```
# imports for printing a matrix, nicely
import pprint
pp = pprint.PrettyPrinter()
# complete LPS solution
def lps(input_string):
n = len(input_string)
# create a lookup table to store results of subproblems
L = [[0 for x in range(n)] for x in range(n)]
# strings of length 1 have LPS length = 1
for i in range(n):
L[i][i] = 1
# consider all substrings
for s_size in range(2, n+1):
for start_idx in range(n-s_size+1):
end_idx = start_idx + s_size - 1
if s_size == 2 and input_string[start_idx] == input_string[end_idx]:
# match with a substring of length 2
L[start_idx][end_idx] = 2
elif input_string[start_idx] == input_string[end_idx]:
# general match case
L[start_idx][end_idx] = L[start_idx+1][end_idx-1] + 2
else:
# no match case, taking the max of two values
L[start_idx][end_idx] = max(L[start_idx][end_idx-1], L[start_idx+1][end_idx]);
# debug line
# pp.pprint(L)
return L[0][n-1] # value in top right corner of matrix
def test_function(test_case):
string = test_case[0]
solution = test_case[1]
output = lps(string)
print(output)
if output == solution:
print("Pass")
else:
print("Fail")
string = "TACOCAT"
solution = 7
test_case = [string, solution]
test_function(test_case)
string = 'BANANA'
solution = 5
test_case = [string, solution]
test_function(test_case)
string = 'BANANO'
solution = 3
test_case = [string, solution]
test_function(test_case)
```
### Example matrices
Example LPS Subproblem matrix 1:
```
input_string = 'BANANO'
LPS subproblem matrix:
B A N A N O
B [[1, 1, 1, 3, 3, 3],
A [0, 1, 1, 3, 3, 3],
N [0, 0, 1, 1, 3, 3],
A [0, 0, 0, 1, 1, 1],
N [0, 0, 0, 0, 1, 1],
O [0, 0, 0, 0, 0, 1]]
LPS length: 3
```
Example LPS Subproblem matrix 2:
```
input_string = 'TACOCAT'
LPS subproblem matrix:
T A C O C A T
T [[1, 1, 1, 1, 3, 5, 7],
A [0, 1, 1, 1, 3, 5, 5],
C [0, 0, 1, 1, 3, 3, 3],
O [0, 0, 0, 1, 1, 1, 1],
C [0, 0, 0, 0, 1, 1, 1],
A [0, 0, 0, 0, 0, 1, 1],
T [0, 0, 0, 0, 0, 0, 1]]
LPS length: 7
```
Note: The lower diagonal values will remain 0 in all cases.
### The matrix rules
You can efficiently fill up this matrix one cell at a time. Each grid cell only depends on the values in the grid cells that are directly on bottom and to the left of it, or on the diagonal/bottom-left. The rules are as follows:
* Start with an `n x n ` matrix where n is the number of characters in a given string; the diagonal should all have the value 1 for the base case, the rest can be zeros.
* As you traverse your string:
* If there is a match, fill that grid cell with the value to the bottom-left of that cell *plus* two.
* If there is not a match, take the *maximum* value from either directly to the left or the bottom cell, and carry that value over to the non-match cell.
* After completely filling the matrix, **the top-right cell will hold the final LPS length**.
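For example, in the `TACOCAT` matrix above, the top-right cell compares the first and last characters (`T` and `T`): they match, so its value is the diagonal/bottom-left cell, which holds the LPS length of `ACOCA` (5), plus two, giving 7.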
### Complexity
What was the complexity of this?
In the solution, we are looping over the elements of our `input_string` using two `for` loops; these are each of $O(N)$ and nested this becomes $O(N^2)$. This behavior dominates our optimized solution.
|
github_jupyter
|
# Centerpartiets budgetmotion 2022
https://www.riksdagen.se/sv/dokument-lagar/dokument/motion/centerpartiets-budgetmotion-2022_H9024121
```
import pandas as pd
import requests
pd.options.mode.chained_assignment = None
multiplier = 1_000_000
docs = [
{'utgiftsområde': 1, 'dok_id': 'H9024141'},
{'utgiftsområde': 2, 'dok_id': 'H9024140'},
{'utgiftsområde': 3, 'dok_id': 'H9024142'},
{'utgiftsområde': 4, 'dok_id': 'H9024143'},
{'utgiftsområde': 5, 'dok_id': 'H9024144'},
{'utgiftsområde': 6, 'dok_id': 'H9024145'},
{'utgiftsområde': 7, 'dok_id': 'H9024146'},
{'utgiftsområde': 8, 'dok_id': 'H9024147'},
{'utgiftsområde': 9, 'dok_id': 'H9024128'},
{'utgiftsområde': 10, 'dok_id': 'H9024148'},
{'utgiftsområde': 11, 'dok_id': 'H9024149'},
{'utgiftsområde': 12, 'dok_id': 'H9024150'},
{'utgiftsområde': 13, 'dok_id': 'H9024127'},
{'utgiftsområde': 14, 'dok_id': 'H9024129'},
{'utgiftsområde': 15, 'dok_id': 'H9024125'},
{'utgiftsområde': 16, 'dok_id': 'H9024126'},
{'utgiftsområde': 17, 'dok_id': 'H9024130'},
{'utgiftsområde': 18, 'dok_id': 'H9024122'},
{'utgiftsområde': 19, 'dok_id': 'H9024123'},
{'utgiftsområde': 20, 'dok_id': 'H9024124'},
{'utgiftsområde': 21, 'dok_id': 'H9024136'},
{'utgiftsområde': 22, 'dok_id': 'H9024135'},
{'utgiftsområde': 23, 'dok_id': 'H9024134'},
{'utgiftsområde': 24, 'dok_id': 'H9024133'},
{'utgiftsområde': 25, 'dok_id': 'H9024132'}]
def find_matching_table(tables, loc=None):
if loc:
return tables[loc]
for table in tables:
if table.columns.shape == (5,):
break
return table
def fetch_table(url, area):
tables = pd.read_html(url, encoding='utf8', header=2)
cols = ['Anslag', 'Namn', '2022', '2023', '2024']
loc = 3 if area in [14, 24] else None
df = find_matching_table(tables, loc)
df.columns = cols
df = df.dropna(how='all')
df = df[~df.Anslag.str.startswith('Summa', na=False)]
df['Utgiftsområde'] = area
return df
tables = []
for doc in docs:
url = f'http://data.riksdagen.se/dokument/{doc["dok_id"]}.html'
table = fetch_table(url, area=doc['utgiftsområde'])
tables.append(table)
df = pd.concat(tables, sort=False)
df = df.dropna(how='all')
for col in ['2022', '2023', '2024']:
df[col] = df[col].astype(str)
df[col] = df[col].str.split('.', expand=True)[0]
df[col] = df[col].str.replace('±0', '0', regex=False)
df[col] = df[col].str.replace('\s+', '', regex=True)
df[col] = df[col].str.replace('−', '-')
df[col] = df[col].astype(int) * multiplier
df.to_csv('../data/budgetmotion-2022-c.csv', index=False)
```
|
github_jupyter
|
## Plotting of profile results
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# common
import os
import os.path as op
# pip
import numpy as np
import pandas as pd
import math
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib import gridspec
# DEV: override installed teslakit
import sys
sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..'))
# teslakit
from teslakit.database import Database, hyswan_db
# interactive widgets
from ipywidgets import interact, interact_manual, interactive, HBox, Layout, VBox
from ipywidgets import widgets
from natsort import natsorted, ns
from moviepy.editor import *
from IPython.display import display, Image, Video
sys.path.insert(0, op.join(os.getcwd(),'..'))
# bluemath swash module (bluemath.DD.swash)
path_swash='/media/administrador/HD/Dropbox/Guam/wrapswash-1d'
sys.path.append(path_swash)
from lib.wrap import SwashProject, SwashWrap
from lib.plots import SwashPlot
from lib.io import SwashIO
from lib.MDA import *
from lib.RBF import *
def Plot_profile(profile):
colors=['royalblue','crimson','gold','darkmagenta','darkgreen','darkorange','mediumpurple','coral','pink','lightgreen','darkgreen','darkorange']
fig=plt.figure(figsize=[17,4])
gs1=gridspec.GridSpec(1,1)
ax=fig.add_subplot(gs1[0])
ax.plot(profile.Distance_profile, -profile.Elevation,linewidth=3,color=colors[prf],alpha=0.7,label='Profile: ' + str(prf))
s=np.where(profile.Elevation<0)[0][0]
ax.plot(profile.Distance_profile[s],-profile.Elevation[s],'s',color=colors[prf],markersize=10)
s=np.argmin(profile.Elevation.values)
ax.plot(profile.Distance_profile[s],-profile.Elevation[s],'d',color=colors[prf],markersize=10)
ax.plot([0,1500],[0,0],':',color='plum',alpha=0.7)
ax.set_xlabel(r'Distance (m)', fontsize=14)
ax.set_ylabel(r'Elevation (m)', fontsize=14)
ax.legend()
ax.set_xlim([0,np.nanmax(profile.Distance_profile)])
ax.set_ylim(-profile.Elevation[0], -np.nanmin(profile.Elevation)+3)
def get_bearing(lat1,lon1,lat2,lon2):
dLon = np.deg2rad(lon2) - np.deg2rad(lon1);
y = math.sin(dLon) * math.cos(np.deg2rad(lat2));
x = math.cos(np.deg2rad(lat1))*math.sin(np.deg2rad(lat2)) - math.sin(np.deg2rad(lat1))*math.cos(np.deg2rad(lat2))*math.cos(dLon);
brng = np.rad2deg(math.atan2(y, x));
if brng < 0: brng+= 360
return brng
# --------------------------------------
# Teslakit database
p_data = r'/media/administrador/HD/Dropbox/Guam/teslakit/data'
# p_data=r'/Users/laurac/Dropbox/Guam/teslakit/data'
db = Database(p_data)
# set site
db.SetSite('GUAM')
#Define profile to run
prf=11
# sl=0 #Sea level
p_out = os.path.join(p_data, 'sites', 'GUAM','HYSWASH')
if not os.path.exists(p_out): os.mkdir(p_out)
p_dataset = op.join(p_out, 'dataset_prf'+str(prf)+'.pkl')
p_subset = op.join(p_out, 'subset_prf'+str(prf)+'.pkl')
p_waves = op.join(p_out, 'waves_prf'+str(prf)+'.pkl')
ds_output = op.join(p_out, 'reconstruction_p{0}.nc'.format(prf))
# Create the project directory
p_proj = op.join(p_out, 'projects') # swash projects main directory
n_proj = 'Guam_prf_{0}'.format(prf) # project name
sp = SwashProject(p_proj, n_proj)
sw = SwashWrap(sp)
si = SwashIO(sp)
sm = SwashPlot(sp)
```
### Set profile and load data
```
min_depth=-20
profiles=xr.open_dataset('/media/administrador/HD/Dropbox/Guam/bati guam/Profiles_Guam_curt.nc')
profile=profiles.sel(profile=prf)
profile['Orientation']=get_bearing(profile.Lat[0],profile.Lon[0],profile.Lat[-1],profile.Lon[-1])
s=np.where(profile.Elevation>min_depth)[0]
profile = xr.Dataset({'Lon': (['number_points'],np.flipud(profile.Lon[s])),
'Lat': (['number_points'],np.flipud(profile.Lat[s])),
'Elevation': (['number_points'],-np.flipud(profile.Elevation[s])),
'Distance_profile': (['number_points'],(profile.Distance_profile[s])),
'Rep_coast_distance': (profile.Rep_coast_distance),
'Orientation': (profile.Orientation),
},
coords={'number_points': range(len(s)),
})
print(profile)
Plot_profile(profile)
```
### Cut profile after maximum
```
extra_positions=[25,28,7,25,-30,-9,-10,15,-22,12,20,18] #Cut profile at maximum + extra_positions
s=np.argmin(profile.Elevation.values)
profile=profile.isel(number_points=range(s+extra_positions[prf]))
print(profile)
Plot_profile(profile)
profile.to_netcdf(path=os.path.join(p_out,'Prf_'+str(prf)+'.nc'))
```
### Load waves
<span style="font-family: times, Times New Roman; font-size:12pt; color:black;">
In the following cell, the input paths are defined. The user must specify the path to the hydraulic boundary conditions, considering the wind forcing and sea conditions. For simplicity, those files must be in NetCDF format. </span>
* `profile` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> : profile id </span><br>
* `sl`: <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> sea level (sl) with respect to msl</span>
* `orientation`: <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> angle (º) of the profile with respect to north (its bearing, computed between the profile end points)</span>
* `waves` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> : significant wave height (Hs) and peak period (Tp). </span><br>
* `wind`: <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> wind speed (W), wind direction (B) </span>
```
tp_lim=3
SIM=pd.read_pickle(os.path.join(db.paths.site.SIMULATION.nearshore,'Simulations_profile_'+str(prf)))
print(len(SIM['Tp'][np.where(SIM['Tp']<=tp_lim)[0]]))
SIM['Tp'][np.where(SIM['Tp']<=tp_lim)[0]]=tp_lim
```
## Keep 200 years of simulation for MDA
```
SIM=SIM.loc[np.where(SIM.time<(SIM.time[0]+len(np.unique(SIM.time))/5))[0],:]
SIM=SIM.reset_index().drop(columns=['index'])
SIM
```
### Load SLR
```
SLR=xr.open_dataset(db.paths.site.CLIMATE_CHANGE.slr_nc)
sl=np.tile(SLR.SLR.values, (100,1))[:len(SIM)] #We repeat the level 100 times (10simulations of 1000 years)
SIM['slr1']=sl[:,0]
SIM['slr2']=sl[:,1]
SIM['slr3']=sl[:,2]
SIM
fig=plt.figure(figsize=[18,9])
vars=['Hs','Tp','Dir','level','wind_dir','wind_speed']
units=[' (m)',' (s)',' (º)',' (m)',' (º)',' (m/s)']
gs1=gridspec.GridSpec(2,3)
for a in range(len(vars)):
ax=fig.add_subplot(gs1[a])
ax.hist(SIM[vars[a]][:np.int(len(SIM)/100)],50,density=True,color='navy') #Plot a small fraction of data
ax.set_xlabel(vars[a] + units[a],fontsize=13)
```
## **1. Clustering and selection method MDA**
<span style="font-family: times, Times New Roman; font-size:12pt; color:black; text-align: justify">
The high computational cost of propagating the entire hindcast dataset requires statistical tools to reduce the set of data to a number of representative cases to perform hybrid downscaling. The maximum dissimilarity algorithm (MDA) defined in the work of Camus et al., 2011, is implemented for this purpose.<br>
<br>
Given a data sample $X=\{x_{1},x_{2},…,x_{N}\}$ consisting of $N$ $n$-dimensional vectors, a subset of $M$ vectors $\{v_{1},…,v_{M}\}$ representing the diversity of the data is obtained by applying this algorithm. The selection starts by initializing the subset with one vector transferred from the data sample, $\{v_{1}\}$. The rest of the $M-1$ elements are selected iteratively, calculating the dissimilarity between each remaining data point and the elements of the subset and transferring the most dissimilar one to the subset. The process finishes when the algorithm reaches $M$ iterations.
The MDA will be applied to two data sets: on the one hand, the aggregated spectrum parameters, and on the other, the spectrum partitions, differentiating seas from swells. In this way, the wind sea adds two extra dimensions: wind speed and wind direction.
<br>
Representative variables for the different hydraulic boundary conditions:<br>
<br>
(1) Waves $H_{s}$, $T_p$<br>
(2) Wind $W_{x}$<br>
</span><br>
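A minimal sketch of this greedy selection is shown below, purely for illustration; the notebook itself uses `MaxDiss_Simplified_NoThreshold` from `lib.MDA`, whose normalisation and handling of directional variables differ from this simplified version.
```
import numpy as np

def mda_sketch(data, m):
    """Greedy max-dissimilarity selection of m rows from data (n_samples x n_dims)."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(axis=0), data.max(axis=0)
    norm = (data - lo) / np.where(hi - lo == 0.0, 1.0, hi - lo)  # scale scalar dims to [0, 1]
    selected = [0]                                               # seed the subset with the first sample
    dist = np.linalg.norm(norm - norm[0], axis=1)                # distance of every sample to the subset
    for _ in range(m - 1):
        nxt = int(np.argmax(dist))                               # transfer the most dissimilar sample
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(norm - norm[nxt], axis=1))
    return data[selected]
```
Once `dataset` is built in the next subsection, something like `mda_sketch(np.array(dataset), 1500)` would mimic the subset size used there.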
### **1.1 Data preprocessing**
<span style="font-family: times, Times New Roman; font-size:12pt; color:black; text-align: justify">
Select $H_{s}$, $T_p$, $W$, $W_{dir}$ from the waves dataset and project the wind velocity onto the cross-shore profile ($W_x$). Although the wave direction cannot be modelled here, SWASH does allow the wind direction to be modelled; even so, working with the projected wind is desirable in order to reduce the number of MDA dimensions.
</span><br>
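Concretely, following the commented `TEST WIND PROJECTION` block in the cell below, the cross-shore wind component is $W_{x} = W\cos\beta$, where $\beta$ is the relative angle between the wind direction and the profile orientation; samples with $\beta \geq 90º$ are set to NaN and later dropped.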
```
# Dataset
# waves = waves.squeeze()
SIM = pd.DataFrame(
{
'time': SIM.time.values,
'hs': SIM.Hs.values,
'tp': SIM.Tp.values,
'w':SIM.wind_speed.values,
'wdir': SIM.wind_dir.values,
'sl':SIM.level.values,
'slr1':SIM.slr1.values,
'slr2':SIM.slr2.values,
'slr3':SIM.slr3.values
}
)
#TEST WIND PROJECTION
# rel_beta=np.nanmin([np.abs(xds_states.wdir- profile.Orientation.values),np.abs(xds_states.wdir+360-profile.Orientation.values)],axis=0)
# rel_beta[np.where(rel_beta>=90)[0]]=np.nan
# rad_beta = (rel_beta*np.pi)/180
# xds_states['wx'] = xds_states.w.values*np.cos(rad_beta)
# #Test that wind projection is right
# pos_test=86281
# print('Orientation: ' + str(profile.Orientation.values))
# print('Wind speed: ' + str(xds_states.w[pos_test]))
# print('Wind dir: ' + str(xds_states.wdir[pos_test]))
# print('Beta relativo: ' + str(rel_beta[pos_test]))
# print('Wind proyected: ' + str(xds_states.wx[pos_test]))
# Proyect wind direction over bathymetry orientation
#Check wind is correctily projected
SIM = proy_wind(profile.Orientation.values, SIM)
SIM=SIM.drop(columns=['time', 'w', 'wdir']).dropna().reset_index()
SIM
dataset = pd.DataFrame(
{
'hs': np.tile(SIM.hs,4),
'tp': np.tile(SIM.tp,4),
'wx': np.tile(SIM.wx,4),
'level':np.concatenate((SIM.sl,SIM.slr1+SIM.sl,SIM.slr2+SIM.sl,SIM.slr3+SIM.sl))
}
)
# dataset=np.column_stack((np.tile(SIM.hs,4),np.tile(SIM.tp,4),np.tile(SIM.wx,4),np.concatenate((SIM.sl,SIM.slr1+SIM.sl,SIM.slr2+SIM.sl,SIM.slr3+SIM.sl))))
# print(len(dataset))
# dataset
fig=plt.figure(figsize=[25,6])
gs1=gridspec.GridSpec(1,1)
ax=fig.add_subplot(gs1[0])
ax.plot(SIM.slr3+SIM.sl,linewidth=0.4)
ax.plot(SIM.slr2+SIM.sl,linewidth=0.4)
ax.plot(SIM.slr1+SIM.sl,linewidth=0.4)
ax.plot(SIM.sl,linewidth=0.4)
```
### **1.2 MDA algorithm**
```
# dataset = SIM.drop(columns=['time', 'sl', 'w', 'wdir'])
# data = np.array(dataset.dropna())[:,:]
# subset, scalar and directional indexes
ix_scalar = [0, 1, 2, 3] # hs, tp, wx, level
ix_directional = [] #
n_subset = 1500 # subset size
# MDA algorithm
out = MaxDiss_Simplified_NoThreshold(
np.array(dataset),
n_subset,
ix_scalar, ix_directional
)
subset = pd.DataFrame({'hs':out[:, 0],'tp':out[:, 1],'wx':out[:, 2],'level':out[:, 3]})
print(subset.info())
# store dataset and subset
SIM.to_pickle(p_dataset)
subset.to_pickle(p_subset)
# Plot subset-dataset
fig = scatter_mda(dataset.loc[np.arange(1,len(dataset),100),:], subset, names = ['Hs(m)', 'Tp(s)', 'Wx(m/s)', 'Level (m)'], figsize=(15,15))
fig.savefig(op.join(p_out, 'mda_profile_'+str(prf)+'.png'),facecolor='w')
```
## **2. Numerical model SWASH**
### **2.1 Data preprocessing**
<span style="font-family: times, Times New Roman; font-size:12pt; color:black;">
In this section, the computational grid is defined from the bathymetric data and, optionally, wave dissipation characteristics due to the bottom friction or vegetation. The input grids will be considered uniform and rectangular, with the computational grid covering the whole bathymetric region. <br>
#### **2.1.1 Cross-shore profile**
<span style="font-family: times, Times New Roman; font-size:12pt; color:black;">
Model boundaries should be far enough from the area of interest and away from steep topography to avoid unrealistic frictional or numerical dispersion effects, but close enough to remain computationally feasible </span> <span style="font-family: times, Times New Roman; font-size:11pt; color:black; background:whitesmoke"> kh < 5. </span> </span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> As a recommendation, the area of interest should be kept at least two wavelengths away from the boundary. In the following cells, different input choices for defining the cross-shore profile are given. </span>
* `dxL` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> : number of nodes per wavelength. This command sets the grid resolution from the number of nodes desired per wavelength at 1 m depth (assuming that at the beach, due to the infragravity waves, the water column can reach 1 m height); see the sketch below. </span><br><br>
* `dxinp`: <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> The resolution of the bathymetric grid is not the same as that of the computational grid. It is advised to avoid extremely steep bottom slopes or sharp obstacles as much as possible. </span>
<span style="font-family: times, Times New Roman; font-size:12pt; color:black;">
Land points are defined as negative while wet points are defined as positive.
</span>
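A minimal sketch of how a `dxL` criterion can be turned into a grid spacing: solve the linear dispersion relation at 1 m depth for a representative peak period and divide the resulting wavelength by `dxL`. The period and `dxL` values below are example numbers, not taken from the dataset.
```
import numpy as np

def wavelength(T, h, g=9.81, n_iter=50):
    """Wavelength from the linear dispersion relation L = (g T^2 / 2 pi) tanh(2 pi h / L)."""
    L = g * T**2 / (2 * np.pi)               # deep-water first guess
    for _ in range(n_iter):                  # fixed-point iteration
        L = g * T**2 / (2 * np.pi) * np.tanh(2 * np.pi * h / L)
    return L

Tp, dxL = 10.0, 30                           # example peak period (s) and nodes per wavelength
dx = wavelength(Tp, h=1.0) / dxL             # computational grid spacing (m)
print(dx)
```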
```
# Import depth FILE
sp.dxL = 30 # nº nodes per wavelength
sp.dxinp = np.abs(profile.Distance_profile.values[0]- profile.Distance_profile.values[1])
sp.depth=profile.Elevation.values
fig = sm.plot_depthfile()
```
#### **2.1.2 Friction**
<p style="font-family: times, Times New Roman; font-size:12pt; color:black;">
With this option the user can activate bottom friction controlled by the Manning formula. As the friction coefficient may vary over the computational region, the friction data can be read from a file or defined by specifying the start and end points of the frictional area (e.g. a reef) along the profile.
</p>
```
# Set a constant friction between two points of the profile
sp.friction_file = False
sp.friction = True
sp.Cf = 0.02 # manning frictional coefficient (m^-1/3 s)
sp.cf_ini = 0 # first point along the profile
sp.cf_fin = sp.dxinp*np.where(profile.Elevation<0)[0][0]+5 # last point along the profile
fig = sm.plot_depthfile()
plt.savefig(op.join(p_out, 'profile_'+str(prf)+'.png'),facecolor='w')
```
### **2.2 Boundary conditions**<br>
<p style="font-family: times, Times New Roman; font-size:12pt; color:black;">
The boundaries of the computational grid in SWASH are either land, beach or water. The wave condition is imposed on the west boundary of the computational domain, so that the waves propagate eastward. To simulate incoming waves without spurious reflections at the wavemaker boundary, a weakly reflective boundary condition allowing outgoing waves is adopted. For this test case, a time series synthesized from parametric information (wave height, period, etc.) will be given as the wavemaker. Here, the wavemaker must be defined as irregular unidirectional waves by means of a 1D spectrum. Both the initial water level and velocity components are set to zero.
</p>
```
# Set the simulation period and grid resolution
sp.tendc = 3600 # simulation period (SEC)
sp.warmup = 0.15 * sp.tendc # spin-up time (s) (default 15%)
```
#### **2.2.1 Sea state**<br>
<p style="font-family: times, Times New Roman; font-size:12pt; color:black;">
The input wave forcing is set as a 1D JONSWAP spectrum with peak-enhancement parameter $\gamma$ (set below). As the water level is a deterministic variable, it can be included by considering different discrete values in the range between low and high tide.
</p>
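For reference, a short sketch of the 1D JONSWAP spectral density that such a wavemaker represents (illustrative only; SWASH builds the spectrum internally from $H_s$, $T_p$ and $\gamma$, and the values below are example numbers).
```
import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    """Unidirectional JONSWAP spectral density, scaled so that its integral equals Hs^2/16."""
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    s = f ** -5 * np.exp(-1.25 * (f / fp) ** -4) * gamma ** r
    return s * (hs ** 2 / 16) / np.trapz(s, f)

freqs = np.linspace(0.03, 0.5, 200)          # frequency axis (Hz)
S = jonswap(freqs, hs=2.0, tp=10.0, gamma=3.3)
```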
<p style="font-family: times, Times New Roman; font-size:12pt; font-style:italic; font-weight:bold; color:royalblue;">
Water level
</p>
```
low_level = 0
high_level = 0
step = 0.5
wl = np.arange(low_level, high_level+step, step) # water level (m)
print('Water levels: {0}'.format(wl))
subset
```
<p style="font-family: times, Times New Roman; font-size:12pt; font-style:italic; font-weight:bold; color:royalblue;">
Jonswap spectrum
</p>
```
dir_wx = np.full([len(subset.wx)],0)
dir_wx = np.where(subset.wx<=0,180,0)
print(dir_wx)
# Define JONSWAP spectrum by means of the following spectral parameters
sp.gamma = 10
waves = pd.DataFrame(
{
"forcing": ['Jonswap'] * n_subset,
"WL": subset.level,
"Hs": subset.hs,
"Tp": subset.tp,
'Wx': np.abs(subset.wx),
'Wdir': dir_wx,
"gamma": np.full([n_subset],sp.gamma),
"warmup": np.full([n_subset],sp.warmup)
}
)
waves
# Create wave series and save 'waves.bnd' file
sp.deltat = 0.5 # delta time over which the wave series is defined
series = sw.make_waves_series(waves)
```
#### **2.2.2 Wind**
<p style="font-family: times, Times New Roman; font-size:12pt; color:black;">
The user can optionally specify the wind speed, direction and wind drag, assuming constant values over the domain. As the test case uses Cartesian coordinates, please set the direction the wind comes from.
</p>
```
# Define wind parameters
sp.wind = True
sp.Wdir = waves.Wdir # wind direction at 10 m height (º)
sp.Vel = waves.Wx # wind speed at 10 m height (m/s)
sp.Ca = 0.0026 # dimensionless coefficient (default 0.002)
```
### **2.3. Run**
<span style="font-family: times, Times New Roman; font-size:12pt; color:black;">
In the following, a series of predefined options have been chosen: <br></span>
* <span style="font-family: times, Times New Roman; font-size:12pt; color:black;font-weight:bold; background:khaki;">Grid resolution</span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> is determined through a number of points per wavelength criteria: Courant number for numerical stability, number of points per wavelength, and manual upper and lower limits for grid cell sizes.<br></span>
* <span style="font-family: times, Times New Roman; font-size:12pt; color:black;">The default value for the maximum </span><span style="font-family: times, Times New Roman; font-size:12pt; color:black; background:khaki; font-weight:bold;">wave breaking steepness</span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;">parameter is $ \alpha = 0.6$<br></span>
* <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> For high, nonlinear waves, or wave interaction with structures with steep slopes (e.g. jetties, quays), a Courant number of 0.5 is advised. Here, a dynamically adjusted </span><span style="font-family: times, Times New Roman; color:black; font-size:12pt; background:khaki; font-weight:bold;">time step</span> <span style="font-family: times, Times New Roman; font-size:12pt; color:black;"> controlled by a Courant number range of (0.1 - 0.5) is implemented<br></span>
<span style="font-family: times, Times New Roman; font-size:12pt; color:black;">
User parameters:<br></span>
* `Nonhydrostatic` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;">to include the non-hydrostatic pressure in the shallow water equations. The hydrostatic pressure assumption can be made for the propagation of long waves, such as large-scale ocean circulation, tides and storm surges. This assumption does not hold for the propagation of short waves, flows over a steep bottom, unstably stratified flows, and other small-scale applications where the vertical acceleration is dominant </span>
* `vert` <span style="font-family: times, Times New Roman; font-size:12pt; color:black;">this command sets the number of vertical layers in case the run is in multi-layered mode </span><br><br>
```
# Create swash wrap
sp.Nonhydrostatic = True # True or False
sp.vert = 1 # vertical layers
sp.delttbl = 1 # time between output fields (s)
sw = SwashWrap(sp)
waves = sw.build_cases(waves)
waves.to_pickle(p_waves)
# Run cases
sw.run_cases()
## TARGET: Extract output from files
#do_extract=1
#if do_extract==1:
# target = sw.metaoutput(waves)
# target = target.rename({'dim_0': 'case'})
# print(target)
# target.to_netcdf(op.join(p_out, 'xds_out_prf'+str(prf)+'.nc'))
#else:
# target = xr.open_dataset(op.join(p_out, 'xds_out_prf'+str(prf)+'.nc'))
#df_target = pd.DataFrame({'ru2':target.Ru2.values, 'q': target.q.values})
#print(df_target)
#df_dataset=subset
#df_dataset['Ru2']=target.Ru2.values
#df_dataset['q']=target.q.values
#print(df_dataset)
```
|
github_jupyter
|
# Dropout regularization with gluon
```
import mxnet as mx
import numpy as np
from mxnet import gluon
from tqdm import tqdm_notebook as tqdm
```
## Context
```
ctx = mx.cpu()
```
## The MNIST Dataset
```
batch_size = 64
num_inputs = 784
num_outputs = 10
def transform(data, label):
return data.astype(np.float32) / 255, label.astype(np.float32)
train_data = gluon.data.DataLoader(dataset=gluon.data.vision.MNIST(train=True, transform=transform),
batch_size=batch_size,
shuffle=True)
test_data = gluon.data.DataLoader(dataset=gluon.data.vision.MNIST(train=False, transform=transform),
batch_size=batch_size,
shuffle=False)
```
## Define the model
```
num_hidden = 256
net = gluon.nn.Sequential()
with net.name_scope():
###########################
# Adding first hidden layer
###########################
net.add(gluon.nn.Dense(units=num_hidden,
activation="relu"))
###########################
# Adding dropout with rate .5 to the first hidden layer
###########################
net.add(gluon.nn.Dropout(rate=0.5))
###########################
# Adding second hidden layer
###########################
net.add(gluon.nn.Dense(units=num_hidden,
activation="relu"))
###########################
# Adding dropout with rate .5 to the second hidden layer
###########################
net.add(gluon.nn.Dropout(rate=0.5))
###########################
# Adding the output layer
###########################
net.add(gluon.nn.Dense(units=num_outputs))
```
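As a quick, optional sanity check (not part of the original tutorial), the snippet below shows that a `Dropout` block only perturbs its input inside `train_mode`; in `predict_mode` (the default at inference time) it acts as the identity.
```
x = mx.nd.ones((1, 8))
drop = gluon.nn.Dropout(rate=0.5)
with mx.autograd.train_mode():
    print(drop(x))    # roughly half the entries zeroed, survivors scaled by 1/(1-0.5) = 2
with mx.autograd.predict_mode():
    print(drop(x))    # unchanged input
```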
## Parameter initialization
```
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1})
```
## Evaluation
```
def evaluate_accuracy(data_iterator, net, mode='train'):
acc = mx.metric.Accuracy()
for i, (data, label) in enumerate(data_iterator):
data = data.as_in_context(ctx).reshape([-1, 784])
label = label.as_in_context(ctx)
if mode == 'train':
with mx.autograd.train_mode():
output = net(data)
else:
with mx.autograd.predict_mode():
output = net(data)
predictions = mx.nd.argmax(output, axis=1)
acc.update(preds=predictions, labels=label)
return acc.get()[1]
```
## Training
```
epochs = 10
smoothing_constant = .01
for e in tqdm(range(epochs)):
for i, (data, label) in tqdm(enumerate(train_data)):
data = data.as_in_context(ctx).reshape([-1, 784])
label = label.as_in_context(ctx)
with mx.autograd.record():
output = net(data)
loss = softmax_cross_entropy(output, label)
loss.backward()
trainer.step(data.shape[0])
##########################
# Keep a moving average of the losses
##########################
curr_loss = mx.nd.mean(loss).asscalar()
moving_loss = (curr_loss if ((i == 0) and (e == 0))
else (1 - smoothing_constant) * moving_loss + (smoothing_constant) * curr_loss)
test_accuracy = evaluate_accuracy(test_data, net, mode='test')
train_accuracy = evaluate_accuracy(train_data, net, mode='train')
print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" %
(e, moving_loss, train_accuracy, test_accuracy))
```
## Predict for the first batch
```
for i, (data, label) in enumerate(test_data):
data = data[0].as_in_context(ctx).reshape([-1, 784])
label = label[0].as_in_context(ctx)
with mx.autograd.record(train_mode=False):
output = net(data)
predictions = mx.nd.argmax(output, axis=1)
print(predictions)
break
```
## Testing if the accuracy is calculated correctly
```
test_accuracy = evaluate_accuracy(test_data, net, mode='test')
test_accuracy
test_accuracy = evaluate_accuracy(test_data, net, mode='test')
test_accuracy
```
|
github_jupyter
|
# Update rules
```
import numpy as np
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d.axes3d as p3
import matplotlib.animation as animation
from IPython.display import HTML
from matplotlib import cm
from matplotlib.colors import LogNorm
def sgd(f, df, x0, y0, lr, steps):
x = np.zeros(steps + 1)
y = np.zeros(steps + 1)
x[0] = x0
y[0] = y0
for i in range(steps):
(dx, dy) = df(x[i], y[i])
x[i + 1] = x[i] - lr * dx
y[i + 1] = y[i] - lr * dy
z = f(x, y)
return [x, y, z]
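# Nesterov momentum (sketch of the update implemented below): keep a velocity v,
# evaluate the gradient at the look-ahead point x + momentum * v, then update
#   v <- momentum * v - lr * grad(x + momentum * v)
#   x <- x + v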
def nesterov(f, df, x0, y0, lr, steps, momentum):
x = np.zeros(steps + 1)
y = np.zeros(steps + 1)
x[0] = x0
y[0] = y0
dx_v = 0
dy_v = 0
for i in range(steps):
(dx_ahead, dy_ahead) = df(x[i] + momentum * dx_v, y[i] + momentum * dy_v)
dx_v = momentum * dx_v - lr * dx_ahead
dy_v = momentum * dy_v - lr * dy_ahead
x[i + 1] = x[i] + dx_v
y[i + 1] = y[i] + dy_v
z = f(x, y)
return [x, y, z]
def adagrad(f, df, x0, y0, lr, steps):
x = np.zeros(steps + 1)
y = np.zeros(steps + 1)
x[0] = x0
y[0] = y0
dx_cache = 0
dy_cache = 0
for i in range(steps):
(dx, dy) = df(x[i], y[i])
dx_cache += dx ** 2
dy_cache += dy ** 2
x[i + 1] = x[i] - lr * dx / (1e-8 + np.sqrt(dx_cache))
y[i + 1] = y[i] - lr * dy / (1e-8 + np.sqrt(dy_cache))
z = f(x, y)
return [x, y, z]
def rmsprop(f, df, x0, y0, lr, steps, decay_rate):
x = np.zeros(steps + 1)
y = np.zeros(steps + 1)
x[0] = x0
y[0] = y0
dx_cache = 0
dy_cache = 0
for i in range(steps):
(dx, dy) = df(x[i], y[i])
dx_cache = decay_rate * dx_cache + (1 - decay_rate) * dx ** 2
dy_cache = decay_rate * dy_cache + (1 - decay_rate) * dy ** 2
x[i + 1] = x[i] - lr * dx / (1e-8 + np.sqrt(dx_cache))
y[i + 1] = y[i] - lr * dy / (1e-8 + np.sqrt(dy_cache))
z = f(x, y)
return [x, y, z]
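# Adam (sketch of the update implemented below): keep exponential moving averages of the
# gradient (first moment, beta1) and of the squared gradient (second moment, beta2),
# correct both for initialization bias, then update
#   x <- x - lr * m_hat / (sqrt(v_hat) + eps)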
def adam(f, df, x0, y0, lr, steps, beta1, beta2):
# adam with bias correction
x = np.zeros(steps + 1)
y = np.zeros(steps + 1)
x[0] = x0
y[0] = y0
dx_v = 0
dy_v = 0
dx_cache = 0
dy_cache = 0
for i in range(steps):
(dx, dy) = df(x[i], y[i])
dx_v = beta1 * dx_v + (1 - beta1) * dx
dx_v_hat = dx_v / (1 - beta1 ** (i + 1))
dx_cache = beta2 * dx_cache + (1 - beta2) * dx ** 2
dx_cache_hat = dx_cache / (1 - beta2 ** (i + 1))
dy_v = beta1 * dy_v + (1 - beta1) * dy
dy_v_hat = dy_v / (1 - beta1 ** (i + 1))
dy_cache = beta2 * dy_cache + (1 - beta2) * dy ** 2
dy_cache_hat = dy_cache / (1 - beta2 ** (i + 1))
x[i + 1] = x[i] - lr * dx_v_hat / (1e-8 + np.sqrt(dx_cache_hat))
y[i + 1] = y[i] - lr * dy_v_hat / (1e-8 + np.sqrt(dy_cache_hat))
z = f(x, y)
return [x, y, z]
def update_lines(num, dataLines, lines):
for line, data in zip(lines, dataLines):
# NOTE: there is no .set_data() for 3 dim data...
line.set_data(data[0:2, :num])
line.set_3d_properties(data[2, :num])
line.set_marker('o')
line.set_markevery([-1])
return lines
def create_and_save_animation(func_title, f, df, params={}, plot_params={}):
x0 = params.get('x0', 0)
y0 = params.get('y0', 0)
lr = params.get('lr', .1)
steps = params.get('steps', 8)
momentum = params.get('momentum', .9)
decay_rate = params.get('decay_rate', .9)
beta1 = params.get('beta1', .9)
beta2 = params.get('beta2', .999)
# sgd params
x0_sgd = params.get('x0_sgd', x0)
y0_sgd = params.get('y0_sgd', y0)
lr_sgd = params.get('lr_sgd', lr)
# nesterov params
x0_nesterov = params.get('x0_nesterov', x0)
y0_nesterov = params.get('y0_nesterov', y0)
lr_nesterov = params.get('lr_nesterov', lr)
# adagrad params
x0_adagrad = params.get('x0_adagrad', x0)
y0_adagrad = params.get('y0_adagrad', y0)
lr_adagrad = params.get('lr_adagrad', lr)
# rmsprop params
x0_rmsprop = params.get('x0_rmsprop', x0)
y0_rmsprop = params.get('y0_rmsprop', y0)
lr_rmsprop = params.get('lr_rmsprop', lr)
# adam params
x0_adam = params.get('x0_adam', x0)
y0_adam = params.get('y0_adam', y0)
lr_adam = params.get('lr_adam', lr)
azim = plot_params.get('azim', -29)
elev = plot_params.get('elev', 49)
rotation = plot_params.get('rotation', -7)
# attaching 3D axis to the figure
fig = plt.figure(figsize=(12, 8))
ax = p3.Axes3D(fig, azim=azim, elev=elev)
# plot the surface
x = np.arange(-6.5, 6.5, 0.1)
y = np.arange(-6.5, 6.5, 0.1)
x, y = np.meshgrid(x, y)
z = f(x, y)
ax.plot_surface(x, y, z, rstride=1, cstride=1,
norm = LogNorm(), cmap = cm.jet)
ax.set_title(func_title, rotation=rotation)
# lines to plot in 3D
sgd_data = sgd(f, df, x0_sgd, y0_sgd, lr_sgd, steps)
nesterov_data = nesterov(f, df, x0_nesterov, y0_nesterov, lr_nesterov, steps, momentum)
adagrad_data = adagrad(f, df, x0_adagrad, y0_adagrad, lr_adagrad, steps)
rmsprop_data = rmsprop(f, df, x0_rmsprop, y0_rmsprop, lr_rmsprop, steps, decay_rate)
adam_data = adam(f, df, x0_adam, y0_adam, lr_adam, steps, beta1, beta2)
data = np.array([sgd_data, nesterov_data, adagrad_data, rmsprop_data, adam_data])
# NOTE: Can't pass empty arrays into 3d version of plot()
lines = [ax.plot(dat[0, 0:1], dat[1, 0:1], dat[2, 0:1])[0] for dat in data]
ax.legend(lines, ['SGD', 'Nesterov Momentum', 'Adagrad', 'RMSProp', 'Adam'])
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.rcParams['animation.html'] = 'html5'
line_ani = animation.FuncAnimation(fig, update_lines, steps+2, fargs=(data, lines),
interval=500, blit=False, repeat=False)
plt.close()
line_ani.save(f'optimization_{func_title}.gif', writer='imagemagick',fps=500/100)
return line_ani
func_title = 'sphere_function'
def f(x, y):
return x ** 2 + y ** 2
def df(x, y):
return (2 * x, 2 * y)
create_and_save_animation(func_title, f, df,
params={
'steps': 15,
'lr': .2,
'x0_sgd': -4,
'y0_sgd': -4,
'x0_nesterov': -4.2,
'y0_nesterov': -3.8,
'x0_adagrad': -4,
'y0_adagrad': 4,
'x0_rmsprop': -4.2,
'y0_rmsprop': 3.8,
'x0_adam': -4,
'y0_adam': 4.2,
},
plot_params={
'azim': 15,
'elev': 60,
'rotation': -7
})
func_title = 'himmelblau_function'
def f(x, y):
return (x ** 2 + y - 11) ** 2 + (x + y ** 2 - 7) ** 2
def df(x, y):
return (4 * x * (x ** 2 + y - 11) + 2 * (x + y ** 2 - 7),
2 * (x ** 2 + y - 11) + 4 * y * (x + y ** 2 - 7))
create_and_save_animation(func_title, f, df,
params={
'steps': 25,
'lr': .005,
'x0': 0,
'y0': -3,
'lr_adagrad': .5,
'lr_rmsprop': .5,
'lr_adam': .5
},
plot_params={
'azim': -29,
'elev': 70,
'rotation': 17
})
```
|
github_jupyter
|
# Sorting Objects in Instance Catalogs
_Bryce Kalmbach_
This notebook provides a series of commands that take a Twinkles PhoSim Instance Catalog and create different pandas dataframes for different types of objects in the catalog. It first separates the full sets of objects in the Instance Catalogs before picking out the sprinkled strongly lensed systems for further analysis. The complete object dataframes contain:
* Stars: All stars in the Instance Catalog
* Galaxies: All bulge and disk components of galaxies in the Instance Catalog
* AGN: All AGN in the Instance Catalog
* SNe: The supernovae that are present in the Instance Catalog
Then there are sprinkled strongly lensed systems dataframes containing:
* Sprinkled AGN galaxies: The images of the lensed AGNs
* Lens Galaxies: These are the foreground galaxies in the lens system.
* **(Not Default)** Sprinkled AGN Host galaxies: While these were turned off in Run 1 of Twinkles, the original motivation for this notebook was to find these objects in a catalog to help the development of lensed hosts at the DESC 2017 SLAC Collaboration Meeting Hack Day.
## Requirements
If you already have an instance catalog from Twinkles on hand all you need now are:
* Pandas
* Numpy
```
import pandas as pd
import numpy as np
```
### Parsing the Instance Catalog
Here we run through the instance catalog and store which rows belong to which class of object. This is necessary since the catalog objects do not all have the same number of properties so we cannot just import them all and then sort within a dataframe.
```
filename = 'twinkles_phosim_input_230.txt'
i = 0
not_star_rows = []
not_galaxy_rows = []
not_agn_rows = []
not_sne_rows = []
with open(filename, 'r') as f:
for line in f:
new_str = line.split(' ')
#Skip through the header
if len(new_str) < 4:
not_star_rows.append(i)
not_galaxy_rows.append(i)
not_agn_rows.append(i)
not_sne_rows.append(i)
i+=1
continue
if new_str[5].startswith('starSED'):
#star_rows.append(i)
not_galaxy_rows.append(i)
not_agn_rows.append(i)
not_sne_rows.append(i)
elif new_str[5].startswith('galaxySED'):
#galaxy_rows.append(i)
not_star_rows.append(i)
not_agn_rows.append(i)
not_sne_rows.append(i)
elif new_str[5].startswith('agnSED'):
#agn_rows.append(i)
not_star_rows.append(i)
not_galaxy_rows.append(i)
not_sne_rows.append(i)
elif new_str[5].startswith('spectra_files'):
#sne_rows.append(i)
not_star_rows.append(i)
not_galaxy_rows.append(i)
not_agn_rows.append(i)
i += 1
```
### Populating Dataframes
Now we load the dataframes for the overall sets of objects.
```
df_star = pd.read_csv(filename, delimiter=' ', header=None,
names = ['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim',
'phoSimMagNorm', 'sedFilepath', 'redshift',
'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset',
'spatialmodel', 'internalExtinctionModel',
'galacticExtinctionModel', 'galacticAv', 'galacticRv'],
skiprows=not_star_rows)
df_star[:3]
df_galaxy = pd.read_csv(filename, delimiter=' ', header=None,
names=['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim',
'phoSimMagNorm', 'sedFilepath',
'redshift', 'shear1', 'shear2', 'kappa',
'raOffset', 'decOffset', 'spatialmodel',
'majorAxis', 'minorAxis', 'positionAngle', 'sindex',
'internalExtinctionModel', 'internalAv', 'internalRv',
'galacticExtinctionModel', 'galacticAv', 'galacticRv'],
skiprows=not_galaxy_rows)
df_galaxy[:3]
df_agn = pd.read_csv(filename, delimiter=' ', header=None,
names=['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim',
'phoSimMagNorm', 'sedFilepath', 'redshift',
'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset',
'spatialmodel', 'internalExtinctionModel',
'galacticExtinctionModel', 'galacticAv', 'galacticRv'],
skiprows = not_agn_rows)
df_agn[:3]
df_sne = pd.read_csv(filename, delimiter=' ', header=None,
names=['prefix', 'uniqueId', 'raPhoSim', 'decPhoSim',
'phoSimMagNorm', 'shorterFileNames', 'redshift',
'shear1', 'shear2', 'kappa', 'raOffset', 'decOffset',
'spatialmodel', 'internalExtinctionModel',
'galacticExtinctionModel', 'galacticAv', 'galacticRv'],
skiprows = not_sne_rows)
df_sne[:3]
```
### Sort out sprinkled Strong Lensing Systems
Now we will pick out the pieces of strongly lensed systems that were sprinkled into the instance catalogs for the Twinkles project.
#### Lensed AGN
We start with the Lensed AGN. In Twinkles Instance Catalogs the lensed AGN have larger `uniqueId`s than normal since we added information about the systems into the `uniqueId`s. We use this to find them in the AGN dataframe.
```
sprinkled_agn = df_agn[df_agn['uniqueId'] > 20000000000]
```
Below we see a pair of lensed images from a double.
```
sprinkled_agn[:2]
```
Now we will extract the extra information we have stored in the `uniqueId`. This information is the Twinkles System number in our custom OM10 catalog in the `data` directory in Twinkles and the Twinkles Image Number which identifies which image in that particular system refers to that line in the catalog.
```
# This step undoes the step in CatSim that gives each component of a galaxy a different offset
twinkles_nums = []
for agn_id in sprinkled_agn['uniqueId']:
twinkles_ids = np.right_shift(agn_id-28, 10)
twinkles_nums.append(twinkles_ids)
#This parses the information added in the last 4 digits of the unshifted ID
twinkles_system_num = []
twinkles_img_num = []
for lens_system in twinkles_nums:
lens_system = str(lens_system)
twinkles_id = lens_system[-4:]
twinkles_id = np.int(twinkles_id)
twinkles_base = np.int(np.floor(twinkles_id/4))
twinkles_img = twinkles_id % 4
twinkles_system_num.append(twinkles_base)
twinkles_img_num.append(twinkles_img)
```
We once again look at the two images we showed earlier. We see that they are image 0 and image 1 from Twinkles System 24.
```
print twinkles_system_num[:2], twinkles_img_num[:2]
```
We now add this information into our sprinkled AGN dataframe and reset the indices.
```
sprinkled_agn = sprinkled_agn.reset_index(drop=True)
sprinkled_agn['twinkles_system'] = twinkles_system_num
sprinkled_agn['twinkles_img_num'] = twinkles_img_num
sprinkled_agn.iloc[:2, [1, 2, 3, -2, -1]]
```
The last step is to now add a column with the lens galaxy `uniqueId` for each system so that we can cross-reference between the lensed AGN and the lens galaxy dataframe we will create next. We start by finding the `uniqueId`s for the lens galaxies.
```
#The lens galaxy ids do not have the extra 4 digits at the end so we remove them
#and then do the shift back to the `uniqueID`.
lens_gal_ids = np.left_shift((np.array(twinkles_nums))/10000, 10) + 26
sprinkled_agn['lens_galaxy_uID'] = lens_gal_ids
```
We now see that the same system has the same lens galaxy `uniqueId` as we expect.
```
sprinkled_agn.iloc[:2, [1, 2, 3, -3, -2, -1]]
```
#### Lens Galaxies
Now we will create a dataframe with the Lens Galaxies.
```
lens_gal_locs = []
for idx in lens_gal_ids:
lens_gal_locs.append(np.where(df_galaxy['uniqueId'] == idx)[0])
lens_gals = df_galaxy.iloc[np.unique(lens_gal_locs)]
lens_gals = lens_gals.reset_index(drop=True)
```
We now have the lens galaxies in their own dataframe that can be joined on the lensed AGN dataframe by the `uniqueId`.
```
lens_gals[:1]
```
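As a quick illustration (not part of the original analysis), the two dataframes can be joined on the lens galaxy id built above; the column names come from the dataframes constructed in this notebook.
```
joined = sprinkled_agn.merge(lens_gals, left_on='lens_galaxy_uID',
                             right_on='uniqueId', suffixes=('_agn', '_lens'))
joined[['uniqueId_agn', 'twinkles_system', 'twinkles_img_num', 'uniqueId_lens']][:2]
```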
And we can check how many systems there are by checking the length of this dataframe.
```
len(lens_gals)
```
Showing that we have 198 systems in the Twinkles field!
#### Lensed AGN Host Galaxies (Not in Twinkles 1 catalogs)
In Twinkles 1 catalogs we do not have host galaxies around our lensed AGN, but in the future we will want to be able to include this. We experimented with this at the [2017 DESC SLAC Collaboration Meeting Hack Day](https://confluence.slac.stanford.edu/display/LSSTDESC/SLAC+2017+-+Friday+Hack+Day) since Nan Li, Matt Wiesner and others are working on adding lensed hosts into images.
Therefore, I have included the capacity to find the host galaxies here for future use.
To start we once again cut based on the `uniqueId` which will be larger than a normal galaxy.
```
host_gals = df_galaxy[df_galaxy['uniqueId'] > 178465239310]
host_gals = df_galaxy[df_galaxy['uniqueId'] > 170000000000]
host_gals[:2]
```
Then like the lensed AGN we add in the info from the longer Ids and the lens galaxy info along with resetting the index.
```
twinkles_gal_nums = []
for gal_id in host_gals['uniqueId']:
twinkles_ids = np.right_shift(gal_id-26, 10)
twinkles_gal_nums.append(twinkles_ids)
host_twinkles_system_num = []
host_twinkles_img_num = []
for host_gal in twinkles_gal_nums:
host_gal = str(host_gal)
host_twinkles_id = host_gal[-4:]
host_twinkles_id = np.int(host_twinkles_id)
host_twinkles_base = np.int(np.floor(host_twinkles_id/4))
host_twinkles_img = host_twinkles_id % 4
host_twinkles_system_num.append(host_twinkles_base)
host_twinkles_img_num.append(host_twinkles_img)
host_lens_gal_ids = np.left_shift((np.array(twinkles_gal_nums))/10000, 10) + 26
host_gals = host_gals.reset_index(drop=True)
host_gals['twinkles_system'] = host_twinkles_system_num
host_gals['twinkles_img_num'] = host_twinkles_img_num
host_gals['lens_galaxy_uID'] = host_lens_gal_ids
host_gals.iloc[:2, [1, 2, 3, -3, -2, -1]]
```
Notice that there are different numbers of sprinkled AGN and host galaxy entries.
```
len(sprinkled_agn), len(host_gals)
```
This is because some host galaxies have both bulge and disk components, but not all do. The example we have been using does have both components and thus we have four entries for the doubly lensed system in the host galaxy dataframe.
```
host_gals[host_gals['lens_galaxy_uID'] == 21393434].iloc[:, [1, 2, 3, -3, -2, -1]]
```
## Final Thoughts
The main point of being able to break up the instance catalogs like this is for validation and future development. Being able to find the sprinkled input for Twinkles images helps us validate what appears in our output catalogs. Storing this input in pandas dataframes makes it easy to find and compare against the output catalogs that are accessed using tools in the DESC Monitor. In addition, this is a useful tool for future development like the creation of lensed images for the AGN host galaxies that we hope to add in the next iteration of Twinkles.
|
github_jupyter
|
# Cloud-based machine learning || 云端机器学习
Thus far, we have looked at building and fitting ML models “locally.” True, the notebooks have been located in the cloud themselves, but the models with all of their predictive and classification power are stuck in those notebooks. To use these models, you would have to load data into your notebooks and get the results there.
到目前为止,我们已经在“本地”构建和調適機器學習模型。的确,这些notebook檔案本身已经在云端,但是所有预测和分类功能的模型也都只在这些notebook中。 要使用这些模型,您必须将数据加载到其中以获取结果。
In practice, we want those models accessible from a number of locations. And while the management of production ML models has a lifecycle all its own, one part of that is making models accessible from the web. One way to do so is to develop them using third-party cloud tools, such as [Microsoft Azure ML Studio](https://studio.azureml.net) (not to be confused with Microsoft Azure Machine Learning Service, which provides end-to-end lifecycle management for ML models).
在实际操作中,我们希望可以从多个位置取得这些模型。 尽管生产ML模型的管理具有其自己的生命周期,使模型可以从网络获取是其中一部分。使用第三方云端工具是一种开发方式,例如Microsoft Azure ML Studio。(不要与Microsoft Azure Machine Learning sService混淆,后者为ML模型提供端到端的生命周期管理)。
Alternatively, we can develop and deploy a function that can be accessed by other programs over the web—a web service—that runs within Azure ML Studio, and we can do so entirely from a Python notebook. In this section, we will use the [`azureml`](https://github.com/Azure/Azure-MachineLearning-ClientLibrary-Python) package to deploy an Azure ML web service directly from within a Python notebook (or other Python environment).
或者,我们可以开发和部署一个功能,该功能可以在Azure ML Studio中运行,并且我们可以通过Web上的其他程序通过Web服务来访问,并且可以完全通过Python notebook进行操作。 在本节中,我们将使用azureml package直接在一個Python notebook(或其他Python环境)中部署Azure ML Web服务。
> <font color=red>**Note:**</font> The `azureml` package presently works only with Python 2. If your notebook is not currently running Python 2, change it in the menu at the top of the notebook by clicking **Kernel > Change kernel > Python 2**.
> 注意:azureml软件包目前仅支持Python2。如果您的notebook当前未运行Python 2,请在上方菜单中单击进行更改
Kernel>Change Kernel> Python 2。
## Create and connect to an Azure ML Studio workspace || 创建并连接到Azure ML Studio工作区
The `azureml` package is installed by default with Azure Notebooks, so we don't have to worry about that. It uses an Azure ML Studio workspace ID and authorization token to connect your notebook to the workspace; you will obtain the ID and token by following these steps:
Azure笔记本会把Azureml软件包默认安装,它使用Azure ML Studio工作区ID和授权令牌确认将笔记本连接到工作区。遵循以下步骤,您将获取ID和令牌:
1. Open [Azure ML Studio](https://studio.azureml.net) in a new browser tab and sign in with a Microsoft account. Azure ML Studio is free and does not require an Azure subscription. Once signed in with your Microsoft account (the same credentials you’ve used for Azure Notebooks), you're in your “workspace.”
在新的浏览器选项标签中打开Azure ML Studio,然后使用Microsoft帐户登录。 Azure ML Studio是免费的,不用认购就能使用。 使用Microsoft帐户(与Azure notebook使用的凭证相同)登录后,就可进入“工作区”。
2. On the left pane, click **Settings**.
在左窗格中单击设置。
<br/><br/>
3. On the **Name** tab, the **Workspace ID** field contains your workspace ID. Copy that ID into the `workspace_id` value in the code cell in Step 5 of the notebook below.
在``名称''选项标签上的``工作区ID''字段上有您的工作区ID。 将该ID复制到下面第5步代码单元中的workspace_id值。
<br/><br/>
4. Click the **Authorization Tokens** tab, and then copy either token into the `authorization_token` value in the code cell in Step 5 of the notebook.
单击“授权token”选项,然后复制任一token至第5步代码单元中的authorization_token”值中。
<br/><br/>
5. Run the code cell below; if it runs without error, you're ready to continue.
运行下面的代码单元; 如果没有错误可继续运行。
```
from azureml import Workspace
# Replace the values with those from your own Azure ML Studio instance; see Prerequisites
# The workspace_id is a string of hexadecimal characters; the token is a long string of random characters.
# 将值替换为您自己的Azure ML Studio实例中的值; 请参阅前提条件(Prerequisites)
# workspace_id是十六进制字符的字符串; token是一长串随机字符。
workspace_id = 'deff291c5b7b42f1b5fdc60669aa8f8b'
authorization_token = 'token'
ws = Workspace(workspace_id, authorization_token)
```
## Explore forest fire data
Let’s look at a meteorological dataset collected by Cortez and Morais for 2007 to study the burned area of forest fires in the northeast region of Portugal.
让我们看一下由Cortez和Morais于2007年收集到的气象数据集,研究的是葡萄牙东北部森林大火燃烧面积
> P. Cortez and A. Morais. A Data Mining Approach to Predict Forest Fires using Meteorological Data.
In J. Neves, M. F. Santos and J. Machado Eds., New Trends in Artificial Intelligence,
Proceedings of the 13th EPIA 2007 - Portuguese Conference on Artificial Intelligence, December,
Guimaraes, Portugal, pp. 512-523, 2007. APPIA, ISBN-13 978-989-95618-0-9.
> P. Cortez和A. Morais。一种使用气象数据预测森林火灾的数据挖掘方法。。 参见J. Neves,MF Santos和J. Machado编辑,《人工智能的新趋势》,第13届EPIA大会论文集-葡萄牙人工智能会议,12月,葡萄牙吉马良斯,第512-523页,2007年。APPIA,ISBN -13 978-989-95618-0-9。
The dataset contains the following features:
数据集包含以下特征:
- **`X`**: x-axis spatial coordinate within the Montesinho park map: 1 to 9
- **`Y`**: y-axis spatial coordinate within the Montesinho park map: 2 to 9
- **`month`**: month of the year: "1" to "12" jan-dec
- **`day`**: day of the week: "1" to "7" sun-sat
- **`FFMC`**: FFMC index from the FWI system: 18.7 to 96.20
- **`DMC`**: DMC index from the FWI system: 1.1 to 291.3
- **`DC`**: DC index from the FWI system: 7.9 to 860.6
- **`ISI`**: ISI index from the FWI system: 0.0 to 56.10
- **`temp`**: temperature in Celsius degrees: 2.2 to 33.30
- **`RH`**: relative humidity in %: 15.0 to 100
- **`wind`**: wind speed in km/h: 0.40 to 9.40
- **`rain`**: outside rain in mm/m2 : 0.0 to 6.4
- **`area`**: the burned area of the forest (in ha): 0.00 to 1090.84
- X:Montesinho公园地图内的x轴空间坐标:1到9
- Y:Montesinho公园地图内的y轴空间坐标:2到9
- month:一年中的月份:“ 1”至“ 12” 指一月至十二月
- 日期:一星期的天數:“ 1”至“ 7”指星期一至星期日
- FFMC:FWI系统中的FFMC指数:18.7至96.20
- DMC:来自FWI系统的DMC指数:1.1至291.3
- DC:FWI系统中的DC指数:7.9至860.6
- ISI:FWI系统中的ISI指数:0.0至56.10
- temp:摄氏温度:2.2至33.30
- RH:相对湿度(%):15.0至100
- 风:以km / h为单位的风速:0.40至9.40
- 雨水:室外雨水(mm / m2):0.0至6.4
- 面积:森林被烧毁的面积(公顷):0.00至1090.84
Let's load the dataset and visualize the area that was burned in relation to the temperature in that region.
让我们加载数据集,并可视化区域温度与燃烧区域的关系。
```
import pandas as pd
df = pd.DataFrame(pd.read_csv('./Data/forestfires.csv'))
%matplotlib inline
from ggplot import *
ggplot(aes(x='temp', y='area'), data=df) + geom_line() + geom_point()
```
Intuitively, the hotter the weather, the more hectares burned in forest fires.
直观地说,天气越热,森林大火烧毁的公顷数就越多。
## Transfer your data to Azure ML Studio
We have our data, but how do we get it into Azure ML Studio in order to use it there? That is where the `azureml` package comes in. It enables us to load data and models into Azure ML Studio from an Azure Notebook (or any Python environment).
我们有我们的数据,但是我们如何将其放入Azure ML Studio以便在那里使用?这就是“azureml”包的好处。它使我们能够从Azure笔记本(或任何Python环境)将数据和模型加载到Azure ML Studio中。
The first code cell of this notebook is what establishes the connection with *your* Azure ML Studio account.
此笔记本的第一个代码单元是与*您的*Azure ML Studio帐户建立连接。
Now that you have your notebook talking to Azure ML Studio, you can export your data to it:
现在您的笔记本已经与Azure ML Studio进行了对话,您可以将数据导出到云端:
```
# from azureml import DataTypeIds
# dataset = ws.datasets.add_from_dataframe(
# dataframe=df,
# data_type_id=DataTypeIds.GenericCSV,
# name='Forest Fire Data 2',
# description='Paulo Cortez and Aníbal Morais (Univ. Minho) @ 2007'
# )
```
After running the code above, you can see the dataset listed in the **Datasets** section of the Azure Machine Learning Studio workspace. (**Note**: You might need to switch between browser tabs and refresh the page in order to see the dataset.)
运行上述代码后,您可以在Azure Machine Learning Studio工作区的**数据集**部分中看到列出的数据集
<br/>
It is also straightforward to list the datasets available in the workspace and transfer datasets from the workspace to the notebook:
列出工作区中可用的数据集并将数据集从工作区传输到笔记本也很简单:
```
print('\n'.join([i.name for i in ws.datasets if not i.is_example])) # only list user-created datasets
```
You can also interact with and examine the dataset in Azure ML Studio directly from your notebook:
您还可以直接从笔记本与Azure ML Studio中的数据集进行操作和检查:
```
# Read some more of the metadata
ds = ws.datasets['Forest Fire Data 2']
print(ds.name)
print(ds.description)
print(ds.family_id)
print(ds.data_type_id)
print(ds.created_date)
print(ds.size)
# Read the contents
df2 = ds.to_dataframe()
df2.head()
```
## Create your model || 创建模型
We're now back into familiar territory: prepping data for the model and fitting the model. To keep it interesting, we'll use the scikit-learn `train_test_split()` function with a slight change of parameters to select 75 percent of the data points for training and 25 percent for validation (testing).
现在我们又回到了熟悉的领域:为模型准备数据并拟合模型。
我们将使用scikit learn“train_test_split()”,只需稍微更改参数,就可以选择75%的数据点进行训练,25%的数据点进行验证(测试)。
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
df[['wind','rain','month','RH']],
df['temp'],
test_size=0.25,
random_state=42
)
```
Did you see what we did there? Rather than select all of the variables for the model, we were more selective and just chose windspeed, rainfall, month, and relative humidity in order to predict temperature.
我们没有为模型选择所有的变量,而是选择了[风速、降雨量、月份,相对湿度]来预测温度。
Fit scikit-learn's `DecisionTreeRegressor` model using the training data. This algorithm is a combination of the linear regression and decision tree classification that you worked with in Section 6.
使用训练数据拟合scikit learn的“决策树处理器”模型。此算法是您在第6节中使用的线性回归和决策树分类的组合。
```
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score
regressor = DecisionTreeRegressor(random_state=42)
regressor.fit(X_train, y_train)
y_test_predictions = regressor.predict(X_test)
print('R^2 for true vs. predicted test set forest temperature: {:0.2f}'.format(r2_score(y_test, y_test_predictions)))
# Play around with this algorithm. 玩玩这个算法。
# Can you get better results changing the variables you select for the training and test data?
# 改变你为训练和测试数据选择的变量,你能得到更好的结果吗?
# What if you look at different variables for the response?
# 改变选择的变量来预测温度?
```
## Deploy your model as a web service || 将模型部署为web服务
This is the important part. Once deployed as a web service, your model can be accessed from anywhere. This means that rather than refit a model every time you need a new prediction for a business or humanitarian use case, you can send the data to the pre-fitted model and get back a prediction.
这是重要的部分。一旦部署为web服务,就可以从任何地方访问您的模型。
First, deploy the model as a predictive web service. To do so, create a wrapper function that takes input data as an argument and calls `predict()` with your trained model and this input data, returning the results.
首先,将模型部署为预测性web服务。 然后,创建一个"包装器"函数,该函数将输入数据作为参数,并使用经过训练的模型和该输入数据调用“predict()”,返回结果。
```
from azureml import services
@services.publish(workspace_id, authorization_token)
@services.types(wind=float, rain=float, month=int, RH=float)
@services.returns(float)
# The name of your web service is set to this function's name
# web服务的名称将设置为此函数的名称
def forest_fire_predictor(wind, rain, month, RH):
return regressor.predict([[wind, rain, month, RH]])[0]
# Hold onto information about your web service so
# you can call it within the notebook later
service_url = forest_fire_predictor.service.url
api_key = forest_fire_predictor.service.api_key
help_url = forest_fire_predictor.service.help_url
service_id = forest_fire_predictor.service.service_id
```
You can also go to the **Web Services** section of your Azure ML Studio workspace to see the predictive web service running there.
您还可以转到Azure ML Studio工作区的**Web服务**部分,查看在那里运行的Web服务。
## Consuming the web service || 使用web服务
Next, consume the web service. To see if this works, try it here from the notebook session in which the web service was created. Just call the predictor directly:
接下来,使用web服务。若要查看此方法是否有效,请在创建web服务的笔记本直接预测:
```
forest_fire_predictor.service(5.4, 0.2, 9, 22.1)
```
At any later time, you can use the stored API key and service URL to call the service. In the example below, data can be packaged in JavaScript Object Notation (JSON) format and sent to the web service.
以后任何时候,您都可以使用存储的API密钥和服务URL来调用服务。在下面的示例中,数据可以用(JSON)格式打包并发送到web服务。
```
import urllib2
import json
data = {"Inputs": {
"input1": {
"ColumnNames": [ "wind", "rain", "month", "RH"],
"Values": [["5.4", "0.2", "9", "22.1"]]
}
}, # Specified feature values
"GlobalParameters": {}
}
body = json.dumps(data)
headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
req = urllib2.Request(service_url, body, headers)
try:
response = urllib2.urlopen(req)
result = json.loads(response.read()) # load JSON-formatted string response as dictionary
print(result['Results']['output1']['value']['Values'][0][0]) # Get the returned prediction
except urllib2.HTTPError, error:
print("The request failed with status code: " + str(error.code))
print(error.info())
print(json.loads(error.read()))
from sklearn.datasets import load_iris
'''
Iris plants dataset
Data Set Characteristics:
Number of Instances
150 (50 in each of three classes)
Number of Attributes
4 numeric, predictive attributes and the class
Attribute Information
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
class:
Iris-Setosa
Iris-Versicolour
Iris-Virginica
Missing Attribute Values
None
Class Distribution
33.3% for each of 3 classes.
Creator
R.A. Fisher
'''
# 鸢尾花植物数据集
# 数据集特征:
# 实例数
# 150(三类各50个)
# 属性数
# 4个数值,预测属性和类别
# 属性信息
# 花萼长度(厘米)
# 花萼宽度(厘米)
# 花瓣长度(厘米)
# 花瓣宽度(厘米)
# 类:
# 山鸢尾
# 杂色鸢尾
# 维吉尼亚鸢尾
# 缺少属性值
# 没有
# 分布
# 3个类别各33.3%。
# 创作者
# 费舍尔(R.A. Fisher)
'''
The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken from Fisher’s paper. Note that it’s the same as in R, but not as in the UCI Machine Learning Repository, which has two wrong data points.
This is perhaps the best known database to be found in the pattern recognition literature. Fisher’s paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for example.) The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.
'''
#著名的鸢尾花数据库,最早是由费舍尔(R.A. Fisher)所采用。该数据集摘自费舍尔的论文。请注意,它与R中的相同,但与UCI机器学习数据集中的R不同,后者有两个错误的数据点。
#这也许是模式识别文献中最有名的数据库。费舍尔的论文是该领域的经典之作,至今一直被引用。 (Duda&Hart的论文就是一个例子)数据集包含3类,每类50个实例,其中每一类都涉及一种鸢尾植物。有一类与另两类可线性分离;但这两类彼此不能线性分离。
# for photos of the flowers: https://www.jianshu.com/p/147c62ad4d2f 鸢尾花照片:https://www.jianshu.com/p/147c62ad4d2f
iris_data_raw = load_iris()
X, y = load_iris(return_X_y=True)
print(list(iris_data_raw.target_names))
iris_df = pd.DataFrame(iris_data_raw.data, columns=iris_data_raw.feature_names)
iris_df.head()
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=.2, random_state=42)
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
# for more details about ensemble methods and gradient boosting please see: https://sklearn.apachecn.org/docs/master/12.html
# 更多集成方法与梯度提升信息,请参见:https://sklearn.apachecn.org/docs/master/12.html
gbc = GradientBoostingClassifier(random_state=42)
gbc.fit(X_train, y_train)
y_prediction_gbc = gbc.predict(X_test)
#预测精度
print('accuracy_score for true vs. predicted test set forest temperature: {:0.2f}'.format(accuracy_score(y_test, y_prediction_gbc)))
#(实际值 , 预测值)
print('actual', 'predicted')
zip(y_test, y_prediction_gbc)
'''
as you can see we were able to achieve a perfect prediction using one of the most powerful models in sklearn
如您所见,我们能够使用sklearn其中一个最强大的模型,来获得完美的预测
and the best part is... we did not even do a parameter tuning step (if we did, we could expect potentially significantly better results, e.g. on harder problems or regression)
最棒的部分是...我们甚至没有执行参数调整步骤(如果有的话,我们可以预期获得明显更好的结果(在更困难的回归问题上))
sklearn is highly optimized with reasonable default parameters / priors
sklearn是经过高度优化,具有合理的默认参数/先验值
when applying the right model to the right data, most of the time it works "right out of the box" / "right off the shelf"
当在正确的数据上应用正确的模型,多数/经常 它工作起来是“易用的” / “现成的"
'''
'''
now you can try to follow the same steps from above and deploy gbc model
现在您可以试试使用上面相同的步骤把gbc ML模型部署到云端
'''
```
You have now created your own ML web service. Let's now see how you can also interact with existing ML web services for even more sophisticated applications.
### Exercise: 练习
Try this same process of training and hosting a model through Azure ML Studio with the Pima Indians Diabetes dataset (in CSV format in your data folder). The dataset has nine columns; use any of the eight features you see fit to try and predict the ninth column, Outcome (1 = diabetes, 0 = no diabetes).
尝试使用Pima Indians Diabetes数据集(数据文件夹中的CSV格式)通过Azure ML Studio进行相同的模型培训和部署过程.数据集有九列;使用您认为合适的八个特性中的任何一个来尝试和预测第九列,结果(1=糖尿病,0=无糖尿病)。
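A possible starting point for the exercise is sketched below; it reuses the imports from earlier cells and assumes the file is named `./Data/diabetes.csv` with the usual Pima column names (`Glucose`, `BMI`, `Age`, ..., `Outcome`); adjust both to match your data folder.
```
pima = pd.read_csv('./Data/diabetes.csv')            # assumed file name
X_train, X_test, y_train, y_test = train_test_split(
    pima[['Glucose', 'BMI', 'Age', 'Pregnancies']],  # any subset of the eight features
    pima['Outcome'],
    test_size=0.25,
    random_state=42)
clf = GradientBoostingClassifier(random_state=42)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```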
> **Takeaway**: In this part, you explored fitting a model and deploying it as a web service. You did this by using now-familiar tools in an Azure Notebook to build a model relating variables surrounding forest fires and then posting that as a function in Azure ML Studio. From there, you saw how you and others can access the pre-fitted models to make predictions on new data from anywhere on the web.
在本部分中,探讨了如何拟合模型并将其部署为web服务。通过在Azure笔记本中使用Data Science必备的工具来构建一个与森林火灾相关的模型,然后将其作为函数发布到Azure ML Studio中。从那里,您看到了您和其他人如何访问预先安装的模型,以便从web上的任何位置对新数据进行预测。
|
github_jupyter
|
# Mumbai House Price Prediction - Supervised Machine Learning-Regression Problem
## Data Preprocessing
# The main goal of this project is to Predict the price of the houses in Mumbai using their features.
# Import Libraries
```
# importing necessary libraries
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
from scipy import stats
import re
```
# Load dataset
```
# Load the dataset
df=pd.read_csv('house_scrape.csv')
df.head(5)
df = df.drop(['type_of_sale'], axis = 1)
df.shape
duplicate = df[df.duplicated()]
duplicate
# Drops the duplicate entires in the dataset.
df=df.drop_duplicates()
# As number of rows would vary we need to reset index.
df=df.reset_index()
df.head()
# Dropping unnecessary columns in dataset.
df=df.drop(labels='index',axis=1)
df.head()
```
# Exploratory Data Analysis
```
df.shape
df.info()
#we have 3 numeric variables and 5 categorical variables
#we have column price in lakhs
df.describe()
#observe the 75% and max values: they show a huge difference
sns.pairplot(df)
plt.show()
# area_insqft and price(L) have slightly linear correlation with some outliers
# value count of each feature
def value_count(df):
for var in df.columns:
print(df[var].value_counts())
print("--------------------------------")
value_count(df)
# correlation heatmap
sns.heatmap(df.corr(),cmap="coolwarm", annot=True)
plt.show()
```
# Prepare Data for Machine Learning Model
# Data Cleaning
```
df.isnull().sum() # find how much missing data is available
df.isnull().mean()*100 # % of missing values
# visualize missing value using heatmap to get idea where is the value missing
plt.figure(figsize=(16,9))
sns.heatmap(df.isnull())
```
# Handling the null values of sale_type
```
df.loc[df['construction_status'] == 'Under Construction', 'Sale_type'] = 'new'
df
#so here we can replace the null values for Sale_type with the help of the construction_status column: we can set 'new' in Sale_type where the status is 'Under Construction'.
df1 = df['Sale_type']
df1
df = df.drop(['Sale_type'],axis = 1)
df.head()
#we can drop the Sale_type as we will concatenate it in df.
df1 = df1.fillna(method='ffill')
df1.isnull().sum()
#to handle the rest of the null values in Sale_type we used the ffill() method
df = pd.concat([df, df1], axis=1)
df.head()
```
# Handling the null values of Bathroom
```
#we need to extract the numeric value from string first
df["Bathroom"] = df.assign(Bathroom = lambda x: x['Bathroom'].str.extract('(\d+)'))
#lets convert the bathroom from object type to numeric
df["Bathroom"] = pd.to_numeric(df["Bathroom"])
df2 = df['Bathroom']
df2
df = df.drop(['Bathroom'],axis = 1)
df.head()
#we can drop the Bathroom as we will concatenate it in df.
df2 = df2.fillna(method='bfill')
df2.isnull().sum()
#to handle the rest of the null values in Bathroom we used the bfill() method
df = pd.concat([df, df2], axis=1)
df.head()
df.isnull().sum() #check for the null values
#so our data has no null values now; we can proceed further with the other data preprocessing
#lets convert the rate_persqft from object type to numeric
# got error cannot convert str "price" at position 604 so replacing price with rate/sqft value.
df["rate_persqft"] = df["rate_persqft"].replace("Price", 8761)
df["rate_persqft"] = pd.to_numeric(df["rate_persqft"])
#now we can check the description of the data again
df.describe()
sns.heatmap(df.corr(), annot=True)
plt.show()
```
# Finding outliers and removing them
```
# function to create histogram, Q-Q plot and boxplot
# for Q-Q plots
import scipy.stats as stats
def diagnostic_plots(df, variable):
# function takes a dataframe (df) and
# the variable of interest as arguments
# define figure size
plt.figure(figsize=(16, 4))
# histogram
plt.subplot(1, 3, 1)
sns.distplot(df[variable], bins=30)
plt.title('Histogram')
# Q-Q plot
plt.subplot(1, 3, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.ylabel('Variable quantiles')
# boxplot
plt.subplot(1, 3, 3)
sns.boxplot(y=df[variable])
plt.title('Boxplot')
plt.show()
num_var = ["BHK","price(L)","rate_persqft","area_insqft","Bathroom"]
for var in num_var:
print("******* {} *******".format(var))
diagnostic_plots(df, var)
# here we observe outlier using histogram,, qq plot and boxplot
#here we can see there are outliers in some features that we will remove to balance our dataset so other variables don't get affected.
df3 = df[['price(L)', 'rate_persqft', 'area_insqft']].copy()
df3
#here we make a new data frame to remove the outliers of the features needed and then concatenate with previous dataframe
df = df.drop(['price(L)','rate_persqft','area_insqft'],axis = 1)
df.head()
#droping the values so we can concat the new clean features
z_scores = stats.zscore(df3)
abs_z_scores = np.abs(z_scores)
filtered_entries = (abs_z_scores < 3).all(axis=1)
df3 = df3[filtered_entries]
#using Z-sore to remove the outliers from the features selected
df3
#this is our new dataframe with removed outliers affecting our data
sns.boxplot(x=df3['price(L)'])
#we can compare the above box plots and see the difference outliers has been removed the ones remaining are relevant to our data
sns.boxplot(x=df3['rate_persqft'])
sns.boxplot(x=df3['area_insqft'])
df = pd.concat([df, df3], axis=1)
df.head()
#concatenate to our previous dataframe
df.isnull().sum()
#after we removed the outliers we get some na values
df = df.dropna()
df
#we can drop those values and reset the index so we get all aligned dataset
df=df.reset_index()
#resetting the index
df=df.drop(labels='index',axis=1)
df.head()
#drop the extra index created that we dont need
```
# Categorical variable encoding
# Encoding Construction_status
```
for cat_var in ["Under Construction","Ready to move"]:
df["construction_status"+cat_var] = np.where(df['construction_status']==cat_var, 1,0)
df.shape
```
# Encoding Sale_type
```
for cat_var in ["new","resale"]:
df["Sale_type"+cat_var] = np.where(df['Sale_type']==cat_var, 1,0)
df.shape
```
# Encoding Location
```
# here we are selecting only the locations which have a count of at least 50
location_value_count = df['location'].value_counts()
location_value_count
location_get_50 = location_value_count[location_value_count>=50].index
location_get_50
for cat_var in location_get_50:
df['location_'+cat_var]=np.where(df['location']==cat_var, 1,0)
df.shape
df.head()
```
# Drop categorical variable
```
df = df.drop(["location","construction_status",'Sale_type'], axis =1)
df.shape
df.head()
df.to_csv('final_house_scrape.csv', index=False)
```
|
github_jupyter
|
```
from scipy.stats import ranksums, sem
import numpy as np
from statannot import add_stat_annotation
import copy
import os
import matplotlib.pyplot as plt
import matplotlib
save_dir = os.path.join("/analysis/fabiane/documents/publications/patch_individual_filter_layers/MIA_revision")
plt.style.use('ggplot')
matplotlib.use("pgf")
matplotlib.rcParams.update({
"pgf.texsystem": "pdflatex",
'font.family': 'serif',
'font.size':8,
'text.usetex': True,
'pgf.rcfonts': False,
})
def get_runtime(filename):
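# Parse a notebook's saved output: grep the "Total time elapsed" lines of the repeated
# runs, convert the h/m/s fields to seconds, and read the iteration count of the final
# checkpoint (*_FINAL.h5) from the model output directory named in the notebook.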
all_runtime_lines = ! grep "Total time elapsed:" $filename
all_iter_lines = ! grep "output_dir = " $filename
times = []
iterations = []
for runtime_line in all_runtime_lines:
time = 0
# convert runtime
for idx, inner_line in enumerate(runtime_line.split(":")[1:4]):
inner_line = inner_line.strip().split("\\")[0]
if idx == 0:
# remove time symbol and convert to seconds
time += int(inner_line.split("h")[0]) * 3600
elif idx == 1:
# remove time symbol and convert to seconds
time += int(inner_line.split("m")[0]) * 60
elif idx == 2:
# remove time symbol
time += int(inner_line.split("s")[0])
times.append(time)
# UKB baseline full set didn't finish reporting for the last of the 10 runs
# it finished running but did not print out its running time.
# Therefore, we ran the model again in a separate file for that run.
if len(times) == 9:
failed_time = 5 * 3600 + 42 * 60 + 49 # taken from individual run of final iteration
times.append(failed_time)
# convert iterations
assert(len(all_iter_lines) == 1)
iter_line = all_iter_lines[0].split("\"")[2]
all_checkpoints = !ls -l $iter_line
for checkpoint in all_checkpoints:
if checkpoint.endswith("FINAL.h5"):
iterations.append(int(checkpoint.split("_")[-2]))
#assert(len(times) == 50 or len(times) == 10)
return times, iterations
# cleanup script for old runs
#!destination="/ritter/share/projects/Methods/Eitel_local_filter/experiments_submission/models/MS/full_set/10xrandom_splits/experiment_r3/backup" find "/ritter/share/projects/Methods/Eitel_local_filter/experiments_submission/models/MS/full_set/10xrandom_splits/experiment_r3/" -type f -newermt "2021-01-01 00:00" -not -newermt "2021-01-22 15:55" -exec bash -c ' dirname=$(dirname {}); mkdir -p "${destination}/${dirname}"; echo ! mv {} ${destination}/${dirname}/' \;;
times, iterations = get_runtime("ADNI_experiment-20_percent-10xrandom_sampling-random_search-Copy1.ipynb")
filename_list = {
"ADNI_small" : [
"ADNI_baseline-20_percent-10xrandom_sampling-random_search.ipynb",
"ADNI_LiuPatches-20_percent-10xrandom_sampling_random_search.ipynb",
"ADNI_experiment-20_percent-10xrandom_sampling-random_search-Copy1.ipynb"
],
"UKB_small" : [
"UKB_sex_baseline-20_percent-10xrandom_sampling_random_search-Copy1.ipynb",
"UKB_sex_LiuPatches-20_percent-10xrandom_sampling_random_search.ipynb",
"UKB_sex_experiment-20_percent-10xrandom_sampling-random_search-Copy1.ipynb"
],
"MS_small" : [
"MS_baseline-full_set-10xrandom_splits-random_search-Copy2.ipynb",
"MS_LiuPatches-full_set-10xrandom_splits.ipynb",
"MS_experiment-full_set-10xrandom_splits-random_search-Copy1.ipynb"
],
"ADNI_big" : [
"ADNI_baseline-full_set-10xrandom_sampling-random_search.ipynb",
"ADNI_LiuPatches-full_set-10xrandom_sampling.ipynb",
"ADNI_experiment-full_set-10xrandom_sampling-random_search.ipynb"
],
"UKB_big" : [
"UKB_sex_baseline-full_set-10xrandom_sampling-random_search-Copy1.ipynb",
"UKB_sex_LiuPatches-full_set-10xrandom_sampling_random_search.ipynb",
"UKB_sex_experiment-full_set-10xrandom_sampling-random_search-Copy1.ipynb"
],
}
"""fig = plt.Figure()
axs = []
for i, experiment in enumerate(filename_list):
print(experiment)
time_base, iter_base = get_runtime(filename_list[experiment][0])
time_pif, iter_pif = get_runtime(filename_list[experiment][1])
# run statistical test
test_time = ranksums(time_base, time_pif)
test_iter = ranksums(iter_base, iter_pif)
print(f"Avg time baseline in seconds: {np.mean(time_base)}")
print(f"Avg time PIF in seconds: {np.mean(time_pif)}")
print(test_time)
print(f"Avg number of iterations baseline: {np.mean(iter_base)}")
print(f"Avg number of iterations PIF: {np.mean(iter_pif)}")
print(test_iter)
# plot results
i *= 3
ax = plt.bar([i, i+1],
[np.mean(time_base), np.mean(time_pif)],
color=["tab:blue", "tab:orange"],
label=["Baseline", "PIF"])
axs.append(ax)
plt.errorbar(x=[i, i+1],
y=[np.mean(time_base), np.mean(time_pif)],
yerr=[sem(time_base), sem(time_pif)],
color="black",
ls="none",
label="_Errorbar")
x1, x2 = i, i+1
y, h, col = np.max(time_base) + 70, 2, 'k'
#plt.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)
if test_time.pvalue < 0.05:
plt.text((x1+x2)*.5, y+h, "*", ha='center', va='bottom', color=col)
#else:
# plt.text((x1+x2)*.5, y+h, "ns", ha='center', va='bottom', color=col)
leg = plt.legend(axs, ["Baseline", "PIF"])
leg.legendHandles[0].set_color('tab:blue')
leg.legendHandles[1].set_color('tab:orange')
plt.xticks(np.arange(0.5, 13, step=3), ["ADNI small", "ADNI big", "UKB small", "UKB big", "VIMS"])
plt.ylabel("Seconds")
plt.title("Runtime in seconds")
plt.show()"""
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
fig.set_size_inches(w=6.1, h=3.1)
def inc_y(y):
if y > 1000:
y += 2500
else:
y += 23
return y
def sub_plot(i, ax, base_data, patch_data, pif_data, test_base_patch, test_base_pif, test_patch_pif):
i *= 4
ax.bar([i, i+1, i+2],
[np.mean(base_data), np.mean(patch_data), np.mean(pif_data)],
color=["tab:gray", "tab:blue", "tab:orange"],
) #label=["Baseline", "PIF"])
ax.errorbar(x=[i, i+1, i+2],
y=[np.mean(base_data), np.mean(patch_data), np.mean(pif_data)],
yerr=[sem(base_data), sem(patch_data), sem(pif_data)],
color="black",
ls="none")
# define coords for significance labels
y, col = np.mean(patch_data), 'k'
if y > 360:
y += 800
h = 500
else:
y += 5
h = 2
# test between baseline and patch based
x1, x2 = i, i+1
if test_base_patch.pvalue < 0.001:
ax.text((x1+x2)*.5, y+h/2, "**", ha='center', va='bottom', color=col)
ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)
elif test_base_patch.pvalue < 0.01:
ax.text((x1+x2)*.5, y+h/2, "*", ha='center', va='bottom', color=col)
ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)
# test between baseline and PIF
x1, x2 = i, i+2
if test_base_pif.pvalue < 0.001:
y = inc_y(y)
ax.text((x1+x2)*.5, y+h/2, "**", ha='center', va='bottom', color=col)
ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)
elif test_base_pif.pvalue < 0.01:
y = inc_y(y)
ax.text((x1+x2)*.5, y+h/2, "*", ha='center', va='bottom', color=col)
ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)
# test between patch-based and PIF
x1, x2 = i+1, i+2
if test_patch_pif.pvalue < 0.001:
y = inc_y(y)
ax.text((x1+x2)*.5, y+h/2, "**", ha='center', va='bottom', color=col)
ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)
elif test_patch_pif.pvalue < 0.05:
y = inc_y(y)
ax.text((x1+x2)*.5, y+h/2, "*", ha='center', va='bottom', color=col)
ax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)
for i, experiment in enumerate(filename_list):
print(experiment)
time_base, iter_base = get_runtime(filename_list[experiment][0])
time_patch, iter_patch = get_runtime(filename_list[experiment][1])
time_pif, iter_pif = get_runtime(filename_list[experiment][2])
# run statistical test
test_time_base_pif = ranksums(time_base, time_pif)
test_time_base_patch = ranksums(time_base, time_patch)
test_time_patch_pif = ranksums(time_patch, time_pif)
test_iter_base_pif = ranksums(iter_base, iter_pif)
test_iter_base_patch = ranksums(iter_base, iter_patch)
test_iter_patch_pif = ranksums(iter_patch, iter_pif)
print(f"Avg time baseline in seconds: {np.mean(time_base)}")
print(f"Avg time patch-based in seconds: {np.mean(time_patch)}")
print(f"Avg time PIF in seconds: {np.mean(time_pif)}")
print("Test time base vs patch ", test_time_base_patch)
print("Test time base vs pif ", test_time_base_pif)
print("Test time patch vs pif ", test_time_patch_pif)
print(f"Avg number of iterations baseline: {np.mean(iter_base)}")
print(f"Avg number of iterations patch-based: {np.mean(iter_patch)}")
print(f"Avg number of iterations PIF: {np.mean(iter_pif)}")
print("Test iter base vs patch ", test_iter_base_patch)
print("Test iter base vs pif ", test_iter_base_pif)
print("Test iter patch vs pif ", test_iter_patch_pif)
# plot run time results
sub_plot(i, axes[0], time_base, time_patch, time_pif, test_time_base_patch, test_time_base_pif, test_time_patch_pif)
# plot training iters results
sub_plot(i, axes[1], iter_base, iter_patch, iter_pif, test_iter_base_patch, test_iter_base_pif, test_iter_patch_pif)
for ax_idx, ax in enumerate(axes):
ax.set_xticks(np.arange(1, 20, step=4))
#ax.set_xticklabels(["ADNI small", "ADNI big", "UKB small", "UKB big", "VIMS"], rotation=45)
ax.set_xticklabels(["ADNI", "UKB", "VIMS", "ADNI", "UKB"], rotation=45)
ax.annotate('Small', (0.22,0), (0, -42), color="gray", xycoords='axes fraction', textcoords='offset points', va='top')
ax.annotate('Big', (0.74,0), (0, -42), color="gray", xycoords='axes fraction', textcoords='offset points', va='top')
trans = ax.get_xaxis_transform()
#ax.annotate('Big', (0.7,0), (0, -30), xycoords=trans, textcoords='offset points', ha='center', va='top')
#ax.annotate('Neonatal', xy=(1, -.1), xycoords=trans, ha="center", va="top")
ax.plot([-.4,-.4,10,10],[-.20,-.20-0.03,-.20-0.03,-.20], color="gray", transform=trans, clip_on=False) # line small
ax.plot([12,12,19,19],[-.20,-.20-0.03,-.20-0.03,-.20], color="gray", transform=trans, clip_on=False) # line big
if ax_idx == 0:
ax.set_ylabel("Seconds")
ax.set_title("Run time in seconds")
handles, labels = ax.get_legend_handles_labels()
leg = ax.legend(["Baseline", "Patch-based", "PIF"])
leg.legendHandles[0] = matplotlib.patches.Rectangle(xy=(-0, -0), width=20, height=7, angle=0)
leg.legendHandles[0].set_color('tab:gray')
leg.legendHandles[1] = matplotlib.patches.Rectangle(xy=(-0, -0), width=20, height=7, angle=0)
leg.legendHandles[1].set_color('tab:blue')
leg.legendHandles[2] = matplotlib.patches.Rectangle(xy=(-0, -0), width=20, height=7, angle=0)
leg.legendHandles[2].set_color('tab:orange')
ax.legend(leg.legendHandles, ["Baseline", "Patch-based", "PIF"], loc="upper left")
else:
ax.set_ylabel("Iterations")
ax.set_title("Number of iterations")
#leg = plt.legend(axes, ["Baseline", "PIF"])
#plt.show()
fig.savefig(os.path.join(save_dir, "Training_speed_comparison.pgf"), bbox_inches='tight', dpi=250)
#fig.show()
```
|
github_jupyter
|
```
# Import dependencies pandas,
# requests, gmaps, census, and finally config's census_key and google_key
# Declare a variable "c" and set it to the census with census_key.
# https://github.com/datamade/census
# We're going to use the default year 2016, however feel free to use another year.
# Run a census search to retrieve data on estimate of male, female, population, and unemployment count for each zip code.
# https://api.census.gov/data/2013/acs5/variables.html
# Show the output of census_data
# Create a variable census_pd and set it to a dataframe made with the census_data's list of dictionaries
# Rename census_pd with appropriate columns "Male", "Female", "Population", "Unemployment Count", and "Zipcode"
# Show the first 5 rows of census_pd
# Create a new variable calc_census_pd and set it to census_pd
# Calculate the % of male to female ratio and add them as new columns Male % and Female %.
# Calculate the unemployment rate based on population
# Show the first 5 rows of calc_census_pd
# Get the correlation coefficients of calc_census_pd
```
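A minimal sketch of the query and dataframe construction described above (not the official answer). The ACS variable codes and the `c.acs5.get()` call below are assumptions -- double-check them against the variable list linked above and the `census` package documentation before using.
```
import pandas as pd
from census import Census
from config import census_key

c = Census(census_key, year=2016)

# male, female, total population and unemployment count for every ZCTA
# (variable codes are assumptions -- verify against the ACS variable list)
census_data = c.acs5.get(
    ("B01001_002E", "B01001_026E", "B01003_001E", "B23025_005E"),
    {"for": "zip code tabulation area:*"},
)

census_pd = pd.DataFrame(census_data).rename(columns={
    "B01001_002E": "Male",
    "B01001_026E": "Female",
    "B01003_001E": "Population",
    "B23025_005E": "Unemployment Count",
    "zip code tabulation area": "Zipcode",
})
census_pd.head()
```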
### Critical Thinking: From the above correlation table. What does the unemployment rate tell you about its correlation with the number of males or females?
#### ANSWER:
```
# Use the describe function to get a quick glance at calc_census_pd.
```
### Do you see anything strange about male % or female % in the describe above?
#### ANSWER:
```
# Create two variables called "male_zipcode_outliers" and "female_zipcode_outliers"
# Set them to queries where male or female are outliers based on the data described in the previous task.
# Example: anything greater than 0.95 is an outlier
# Show all rows for either "male_zipcode_outliers" and "female_zipcode_outliers"
```
### What is a possible cause of some outliers with larger populations?
Hint: Look up the zipcode for larger population of either male or female outliers.
What information do these zipcodes have in common?
### ANSWER:
# Heatmap of population
```
# Create a variable "zip_lng_lat_data" and using pandas import the zip_codes_states.csv from Resources folder.
# https://www.gaslampmedia.com/download-zip-code-latitude-longitude-city-state-county-csv/
# HINT: When loading zipcodes they may turn into integers and lose their 0's.
# To correct this check out dtype in the documentation:
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
# Show the first 5 rows of zip_lng_lat_data
# Get the longitude and latitude based calc_census_pd by merging them on their zip code columns.
# Show the first 5 rows of merged_table
# Configure gmaps with API key
# Define locations as a dataframe of latitude and longitude from merged_table.
# HINT: You'll need to drop the NaN before storing into locations or population
# Define population as the population from merged_table
# HINT: You'll need to drop the NaN before storing into locations or population
# Create a population Heatmap layer
# Note you may need to run the following in your terminal to show the gmaps figure.
# jupyter nbextension enable --py --sys-prefix widgetsnbextension
# jupyter nbextension enable --py --sys-prefix gmaps
# Recommended settings for heatmap layer: max_intensity=2000000 and point radius = 1
# Adjust heat_layer setting to help with heatmap dissipating on zoom
```
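A minimal sketch of the heatmap layer described above (not the official answer). The column names assume the merge kept "latitude"/"longitude" from the zip code CSV and "Population" from the census dataframe.
```
import gmaps
from config import google_key

gmaps.configure(api_key=google_key)

# drop NaN before building locations and weights
clean = merged_table.dropna(subset=["latitude", "longitude", "Population"])
locations = clean[["latitude", "longitude"]]
population = clean["Population"]

fig = gmaps.figure()
heat_layer = gmaps.heatmap_layer(locations, weights=population,
                                 max_intensity=2000000, point_radius=1,
                                 dissipating=False)
fig.add_layer(heat_layer)
fig
```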
### What is a downfall of using zip codes for mapping?
### ANSWER:
https://gis.stackexchange.com/questions/5114/obtaining-up-to-date-list-of-us-zip-codes-with-latitude-and-longitude-geocodes
ZIP codes are a habitually abused geography. It's understandable that people want to use them because they are so visible and well-known but they aren't well suited to any use outside the USPS. ZIP codes aren't associated with polygons, they are associated with carrier routes and the USPS doesn't like to share those. Some ZIP codes are points e.g. a ZIP code may be associated with a post office, a university or a large corporate complex. They are used to deliver mail.
The Census Bureau creates ZIP Code Tabulation Areas (ZCTA) based on their address database. If it's appropriate to your work you could try taking the centroid of the ZCTAs. ZCTA Geography from the 2010 Census is available on the Census Bureau website.
|
github_jupyter
|
# Speech Identity Inference
Let's check if the pretrained model can really identify speakers.
```
import os
import numpy as np
import pandas as pd
from sklearn import metrics
from tqdm.notebook import tqdm
from IPython.display import Audio
from matplotlib import pyplot as plt
%matplotlib inline
import tensorflow as tf
import tensorflow_io as tfio
import tensorflow_addons as tfa
from train_speech_id_model import BaseSpeechEmbeddingModel
from create_audio_tfrecords import AudioTarReader, PersonIdAudio
sr = 48000
m = BaseSpeechEmbeddingModel()
m.summary()
# 90.ckpt: auc = 0.9525
# 110.ckpt: auc = 0.9533
chkpt = 'temp/cp-0110.ckpt'
m.load_weights(chkpt)
m.compile(
optimizer=tf.keras.optimizers.Adam(0.0006),
loss=tfa.losses.TripletSemiHardLoss()
)
# m.save('speech-id-model-110')
# changing the corpus to other languages allows evaluating how the model transfers between languages
dev_dataset = tfrecords_audio_dataset = tf.data.TFRecordDataset(
'data/cv-corpus-7.0-2021-07-21-en.tar.gz_dev.tfrecords.gzip', compression_type='GZIP',
# 'data/cv-corpus-7.0-2021-07-21-en.tar.gz_test.tfrecords.gzip', compression_type='GZIP',
num_parallel_reads=4
).map(PersonIdAudio.deserialize_from_tfrecords)
samples = [x for x in dev_dataset.take(2500)]
# decode audio
samples = [(tfio.audio.decode_mp3(x[0])[:, 0], x[1]) for x in samples]
# is the audio decoded correctly?
Audio(samples[10][0], rate=sr)
# compute the embeddings
embeddings = []
for audio_data, person_id in tqdm(samples):
cur_emb = m.predict(
tf.expand_dims(audio_data, axis=0)
)[0]
embeddings.append(cur_emb)
```
## Check embedding quality
Ideally, embeddings from the same person should look the same.
```
n_speakers = len(set([x[1].numpy() for x in samples]))
print(f'Loaded {n_speakers} different speakers')
pairwise_diff = {'same': [], 'different': []}
for p in tqdm(range(len(samples))):
for q in range(p + 1, len(samples)):
id_1 = samples[p][1]
id_2 = samples[q][1]
dist = np.linalg.norm(embeddings[p] - embeddings[q])
if id_1 == id_2:
pairwise_diff['same'].append(dist)
else:
pairwise_diff['different'].append(dist)
plt.figure(figsize=(12, 8))
plt.boxplot([pairwise_diff[x] for x in pairwise_diff])
plt.xticks([k + 1 for k in range(len(pairwise_diff))], [x for x in pairwise_diff])
plt.ylabel('Embedding distance')
plt.title('Boxplot of speaker identifiability')
# what do we care about?
# given that 2 samples are different, we don't want to predict `same`
# secondarily, given that 2 samples are the same, we want to predict `same`
# threshold - alpha from 0 (median of same) to 1 (median of different)
alpha = 0.2
# if using the validation set, we can calibrate t
t = np.median(pairwise_diff['same']) + alpha * (np.median(pairwise_diff['different']) - np.median(pairwise_diff['same']))
specificity = np.sum(np.array(pairwise_diff['different']) > t) / len(pairwise_diff['different'])
sensitivity = np.sum(np.array(pairwise_diff['same']) < t) / len(pairwise_diff['same'])
print('Sensitivity, specificity = ', sensitivity, specificity)
same_lbl = [0] * len(pairwise_diff['same'])
diff_lbl = [1] * len(pairwise_diff['different'])
scores = pairwise_diff['same'] + pairwise_diff['different']
# scale scores to range [0,1] and change the threshold accordingly
scores = np.array(scores) * 0.5
t = t * 0.5
labels = same_lbl + diff_lbl
len(scores), len(labels)
fpr, tpr, thresholds = metrics.roc_curve(labels, scores, pos_label=1)
plt.figure(figsize=(12, 8))
roc_auc = metrics.roc_auc_score(labels, scores)
plt.title(f'ROC curve: AUC = {np.round(roc_auc, 4)} {chkpt}')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1])
plt.figure(figsize=(12, 8))
plt.title('Point of operation')
plt.plot(thresholds, 1 - fpr, label='Specificity')
plt.plot(thresholds, tpr, label='Sensitivity')
plt.plot([t, t], [0, 1], label='Threshold')
plt.xlabel('Threshold level')
plt.xlim([0, 1])
plt.legend()
```
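As a small usage sketch (assuming the model `m`, the decoded `samples` and the calibrated threshold `t` from above), two recordings can be compared by thresholding the distance between their embeddings. Note that `t` was rescaled by 0.5 above, so the raw distance is rescaled the same way here.
```
def same_speaker(audio_a, audio_b, model=m, threshold=t):
    """Return True if the two decoded waveforms appear to come from the same speaker."""
    emb_a = model.predict(tf.expand_dims(audio_a, axis=0))[0]
    emb_b = model.predict(tf.expand_dims(audio_b, axis=0))[0]
    return 0.5 * np.linalg.norm(emb_a - emb_b) < threshold

# e.g. compare two clips from the decoded `samples` list
print(same_speaker(samples[10][0], samples[11][0]))
```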
## Select best model on validation
Strategy: compute the loss without sorting the validation set, so each batch contains multiple repeats of the same voice; this also keeps the evaluation consistent across checkpoints. The batch size should be as big as possible.
```
triplet_loss = tfa.losses.TripletSemiHardLoss()
# compute all predictions
def mp3_decode_fn(audio_bytes, audio_class):
audio_data = tfio.audio.decode_mp3(audio_bytes)[:, 0]
return audio_data, audio_class
all_preds = []
all_labels = []
for x in tqdm(dev_dataset.take(1300).map(mp3_decode_fn)):
s = x[0]
all_preds.append(m.predict(
tf.expand_dims(x[0], axis=0)
)[0])
all_labels.append(x[1].numpy())
len(all_preds)
batch_size = 128
n_batches = len(all_preds) // batch_size
vec_size = len(all_preds[0])
np_preds = np.reshape(all_preds[0:batch_size * n_batches], (n_batches, batch_size, vec_size))
np_labls = np.reshape(all_labels[0:batch_size * n_batches], (n_batches, batch_size))
total_loss = 0
for lbl, pred in zip(np_labls, np_preds):
total_loss += triplet_loss(lbl, pred).numpy()
total_loss = total_loss / len(lbl)
print(f'Total loss: {total_loss}')
all_checkpoints = [x.split('.')[0] + '.ckpt' for x in os.listdir('temp') if 'ckpt.index' in x]
all_results = []
for checkpoint in tqdm(all_checkpoints):
m.load_weights(os.path.join('temp', checkpoint))
all_preds = []
all_labels = []
n_items = 4600
for x in tqdm(dev_dataset.take(n_items).map(mp3_decode_fn),
total=n_items, leave=False):
# for x in tqdm(dev_dataset.map(mp3_decode_fn),
# leave=False):
s = x[0]
all_preds.append(m.predict(
tf.expand_dims(x[0], axis=0)
)[0])
all_labels.append(x[1].numpy())
batch_size = 128
n_batches = len(all_preds) // batch_size
vec_size = len(all_preds[0])
np_preds = np.reshape(all_preds[0:batch_size * n_batches], (n_batches, batch_size, vec_size))
np_labls = np.reshape(all_labels[0:batch_size * n_batches], (n_batches, batch_size))
total_loss = 0
for lbl, pred in zip(np_labls, np_preds):
total_loss += triplet_loss(lbl, pred).numpy()
total_loss = total_loss / len(lbl)
cur_result = {
'checkpoint': checkpoint,
'val_loss': total_loss
}
print(cur_result)
all_results.append(cur_result)
df_val = pd.DataFrame(all_results)
df_val['idx'] = df_val.checkpoint.apply(lambda z: int(z.split('.')[0].split('-')[1]))
df_val = df_val.set_index('idx')
df_val.to_csv('val_triplet_loss.csv')
# df_val
df_val.plot()
```
|
github_jupyter
|
```
import numpy as np
import cs_vqe as c
import ast
import os
from openfermion import qubit_operator_sparse
import conversion_scripts as conv_scr
import scipy as sp
from openfermion import qubit_operator_sparse
import conversion_scripts as conv_scr
from openfermion.ops import QubitOperator
# with open("hamiltonians.txt", 'r') as input_file:
# hamiltonians = ast.literal_eval(input_file.read())
working_dir = os.getcwd()
data_dir = os.path.join(working_dir, 'data')
data_hamiltonians_file = os.path.join(data_dir, 'hamiltonians.txt')
with open(data_hamiltonians_file, 'r') as input_file:
hamiltonians = ast.literal_eval(input_file.read())
for key in hamiltonians.keys():
print(f"{key: <25} n_qubits: {hamiltonians[key][1]:<5.0f}")
mol_key = 'H2_6-31G_singlet'
# mol_key ='H2-O1_STO-3G_singlet'
# currently index 2 is the contextual part
# and index 3 is the NON contextual part
# join together for full Hamiltonian:
ham = hamiltonians[mol_key][2]
ham.update(hamiltonians[mol_key][3]) # full H
ham
print(f"n_qubits: {hamiltonians[mol_key][1]}")
```
# Get non-contextual H
```
nonH_guesses = c.greedy_dfs(ham, 10, criterion='weight')
nonH = max(nonH_guesses, key=lambda x:len(x)) # largest nonCon part found by dfs alg
```
Split into:
$$H = H_{c} + H_{nc}$$
```
nonCon_H = {}
Con_H = {}
for P in ham:
if P in nonH:
nonCon_H[P]=ham[P]
else:
Con_H[P]=ham[P]
```
## Testing contextuality
```
print('Is NONcontextual correct:', not c.contextualQ_ham(nonCon_H))
print('Is contextual correct:',c.contextualQ_ham(Con_H))
```
# Classical part of problem!
Take $H_{nc}$ and split into:
- $Z$ = operators that completely commute with all operators in $S$
- $T$ = remaining operators in $S$
- where $S = Z \cup T$ and $S$ is set of Pauli operators in $H_{nc}$
- We then split the set $T$ into cliques $C_{1}, C_{2}, ... , C_{|T|}$
- all ops in a clique commute
- ops between cliques anti-commute!
```
bool_flag, Z_list, T_list = c.contextualQ(list(nonCon_H.keys()), verbose=True)
Z_list
T_list
```
## Get quasi model
First we define
- $C_{i1}$ = first Pauli in each $C_{i}$ set
- $A_{ij} = C_{ij}C_{i1}$
- $G^{\prime} = \{1 \cdot P_{i} \;| \; i=1,2,...,|Z| \}$
- aka all the completely commuting terms with coefficients set to +1!
- We define $G$ to be an independent set of $G^{\prime}$
- where $G \subseteq G^{\prime}$
```
G_list, Ci1_list, all_mappings = c.quasi_model(nonCon_H)
print('non-independent Z list:', Z_list)
print('G (independent) Z list:', G_list)
print('all Ci1 terms:', Ci1_list)
```
$$R = G \cup \{ C_{i1} \;| \; i=1,2,...,N \}$$
```
# Assemble all the mappings from terms in the Hamiltonian to their products in R:
all_mappings
```
Overall, $R$ is basically the reduced non-contextual set
- where everything in original non-contextual set can be found by **inference!**
# Function form
$$R = G \cup \{ C_{i1} \;| \; i=1,2,...,N \}$$
- note: the q variables correspond to $G$
- note: the r variables correspond to $C_{i1}$
```
model = [G_list, Ci1_list, all_mappings]
fn_form = c.energy_function_form(nonCon_H, model)
# returns [
#              dimension of q,
# dimension of r,
# [coeff, indices of q's, indices of r's, term in Hamiltonian]
# ]
fn_form
Energy_function = c.energy_function(fn_form)
import random
### now for the q terms we only have +1 or -1 assignment!
q_variables = [random.choice([1,-1]) for _ in range(fn_form[0])]
### r variables is anything that makes up unit vector!
r_variables = c.angular(np.arange(0,2*np.pi, fn_form[1]))
r_variables
Energy_function(*q_variables,*r_variables)
```
The ```find_gs_noncon``` function optimizes the above steps by:
1. brute forcing all choices of ```q_variables```
- ```itertools.product([1,-1],repeat=fn_form[0])```
2. optimizing over ```r_variables``` (in code ```x```)
- using a SciPy optimizer! (a rough sketch of this loop follows below)
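For intuition, here is a rough sketch of that loop using the ```Energy_function``` and ```fn_form``` built above. It is illustrative only and is **not** how ```c.find_gs_noncon``` is implemented internally (that routine parametrizes $\vec{r}$ by angles and can use differential evolution); this version simply renormalizes $\vec{r}$ inside the objective, and assumes ```fn_form[1] >= 1``` and that ```Energy_function``` expects ```fn_form[1]``` unit-vector components for $\vec{r}$, as in the cell above.
```
import itertools
import numpy as np
from scipy.optimize import minimize

dim_q, dim_r = fn_form[0], fn_form[1]
best_energy, best_params = np.inf, None
for q in itertools.product([1, -1], repeat=dim_q):  # brute force the +/-1 assignments of q
    # optimize r directly, renormalizing inside the objective so it stays a unit vector
    objective = lambda r: Energy_function(*q, *(r / np.linalg.norm(r)))
    res = minimize(objective, x0=np.ones(dim_r) / np.sqrt(dim_r))
    if res.fun < best_energy:
        best_energy = res.fun
        best_params = (list(q), list(res.x / np.linalg.norm(res.x)))

print(best_energy)
print(best_params)
```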
```
model = [G_list, Ci1_list, all_mappings]
lowest_eigenvalue, ground_state_params, model_copy, fn_form_copy, = c.find_gs_noncon(nonCon_H,
method = 'differential_evolution',
model=model,
fn_form=fn_form) # returns: best + [model, fn_form]
print(lowest_eigenvalue)
print(ground_state_params)
## check
Energy_function(*ground_state_params[0],*ground_state_params[1]) == lowest_eigenvalue
```
# Now need to rotate Hamiltonian!
We now have non contextual ground state: $(\vec{q}, \vec{r})$
```
ground_state_params
```
We can use this result - the ground state of $H_{nc}$ - as a classical estimate of the ground state of the full Hamiltonian ($H = H_{c} + H_{nc}$)
However, we can also obtain a quantum correction using $H_{c}$
By minimizing the energy of the remaining terms in the Hamiltonian over the quantum states that are **consistent with the noncontextual ground state**.
To do this we first rotate each $G_{j}$ and $\mathcal{A} = \sum_{i=1}^{N} r_{i}A_{i}$:
```
model = [G_list, Ci1_list, all_mappings]
print(G_list) # G_j terms!
print(Ci1_list) # mathcal(A)
```
to SINGLE-QUBIT Pauli $Z$ operators!
- to map the operators in $G$ to single qubit Pauli operators, we use $\frac{\pi}{2}$ rotations!
- note $\mathcal{A}$ is an anti-commuting set... therefore we can use $N-1$ rotations, as in unitary partitioning's sequence of rotations, to do this!
- $R^{\dagger}\mathcal{A} R = \text{single Pauli op}$ (a small numerical sanity check follows below)
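As a quick numerical sanity check of the claim that $\frac{\pi}{2}$ rotations map Paulis to Paulis: for anticommuting Paulis $A$ and $B$, conjugating $A$ by $e^{-i\frac{\pi}{4}B}$ returns the single Pauli $iAB$. The two-qubit operators below are an arbitrary illustration, not taken from the Hamiltonian above.
```
import numpy as np
from scipy.linalg import expm

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

A = np.kron(X, X)  # operator to rotate
B = np.kron(Z, I)  # rotation generator; anticommutes with A

R = expm(-1j * (np.pi / 4) * B)  # exp(-i * (pi/2) * B / 2)
rotated = R @ A @ R.conj().T
# for anticommuting Paulis, R A R^dag = i A B; here i*(XX)(ZI) = Y (x) X
print(np.allclose(rotated, np.kron(Y, X)))  # True
```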
# Rotate full Hamiltonian to basis with diagonal noncontextual generators!
function ```diagonalize_epistemic```:
1. first if else statement:
- if cliques present:
- first maps A to single Pauli operator (if cliques present)
- then rotates to diagonalize G union with the single Pauli operator of A (hence the GuA name!)
- else if NO cliques present:
- gets rotations to diagonalize G
- these rotations make up GuA term in code!
2. NEXT code loops over terms in GuA (denoted as g in code)
- if g is not a single qubit $Z$:
- the code generates the rotations needed to make g diagonal (rotations)
- then constructs map of g to single Z (J rotation)
- Note R is applied to GuA
#########
- Note rotations are given in Appendix A of https://arxiv.org/pdf/2011.10027.pdf
- First the code checks if the g op in GuA is diagonal
- if so, it needs to apply a "K" rotation (involving $Y$ and $I$ operators, see pg 11 top) to make it NOT diagonal
- now the operator will be non-diagonal!
- next generate "J" rotation
- turns non-diagonal operator into a single qubit $Z$ operator!
```
# Get sequence of rotations required to diagonalize the generators for the noncontextual ground state!
Rotations_list, diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops = c.diagonalize_epistemic(model,
fn_form,
ground_state_params)
# rotations to map A to single Pauli operator!
Rotations_list
# rotations to diagonalize G
diagonalized_generators_GuA
eigen_vals_nonC_ground_state_GuA_ops
```
# NEW LCU method
```
N_index=0
check_reduction=True
N_Qubits= hamiltonians[mol_key][1]
R_LCU, Rotations_list, diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops= c.diagonalize_epistemic_LCU(
model,
fn_form,
ground_state_params,
N_Qubits,
N_index,
check_reduction=check_reduction)
R_LCU
order = list(range(hamiltonians[mol_key][1]))
N_index=0
check_reduction=True
N_Qubits= hamiltonians[mol_key][1]
reduced_H = c.get_reduced_hamiltonians_LCU(Con_H,
model,
fn_form,
ground_state_params,
order,
N_Qubits,
N_index,
check_reduction=check_reduction)
len(reduced_H[-1])
reduced_H[3]
H = conv_scr.Get_Openfermion_Hamiltonian(reduced_H[-1])
sparseH = qubit_operator_sparse(H, n_qubits=hamiltonians[mol_key][1])
sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0]
## old way
# Get sequence of rotations required to diagonalize the generators for the noncontextual ground state!
Rotations_list, diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops = c.diagonalize_epistemic(model,
fn_form,
ground_state_params)
len(diagonalized_generators_GuA)
Rotations_list
### old way
order = list(range(hamiltonians[mol_key][1]))
red_H = c.get_reduced_hamiltonians(ham,
model,
fn_form,
ground_state_params,
order)
len(red_H[0])
len(red_H[0])
red_H[0]
H = conv_scr.Get_Openfermion_Hamiltonian(red_H[-1])
sparseH = qubit_operator_sparse(H, n_qubits=hamiltonians[mol_key][1])
sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0]
from scipy.sparse.linalg import expm
from Misc_functions import sparse_allclose
R_LCU_QubitOp = QubitOperator()
for P in R_LCU: R_LCU_QubitOp+=P
R_LCU_mat = qubit_operator_sparse(R_LCU_QubitOp, n_qubits=hamiltonians[mol_key][1])
R_SeqRot_QubitOp= conv_scr.convert_op_str(Rotations_list[0][1], 1)
R_SeqRot_mat = qubit_operator_sparse(R_SeqRot_QubitOp, n_qubits=hamiltonians[mol_key][1])
theta_sk=Rotations_list[0][0]
exp_rot = expm(R_SeqRot_mat * theta_sk)# / 2)
sparse_allclose(exp_rot, R_LCU_mat)
```
# Restricting the Hamiltonian to a contextual subspace
(Section B of https://arxiv.org/pdf/2011.10027.pdf)
In the rotated basis the Hamiltonian is restricted to the subspace stabilized by the noncontextual generators $G_{j}'$
```
print(diagonalized_generators_GuA) # G_j' terms!
```
The quantum correction is then obtained by minimizing the expectation value of this restricted Hamiltonian!
(over +1 eigenvectors of the remaining non-contextual generators $\mathcal{A}'$)
```
print(Ci1_list) # mathcal(A)
```
- $\mathcal{H}_{1}$ denotes the Hilbert space of $n_{1}$ qubits acted on by the single qubit $G_{j}'$ terms
- $\mathcal{H}_{2}$ denotes Hilbert space of remaining $n_{2}$
Overall full Hilbert space is: $\mathcal{H}=\mathcal{H}_{1} \otimes \mathcal{H}_{2}$
The **contextual Hamiltonian** in this rotated basis is:
$$H_{c}'=\sum_{P \in \mathcal{S_{c}'}} h_{P}P$$
The set of Pauli terms in $H_{c}'$ is $\mathcal{S_{c}'}$, where terms in $\mathcal{S_{c}'}$ act on both $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ subspaces in general!
We can write $P$ terms as:
$$P=P_{1}^{\mathcal{H}_{1}} \otimes P_{2}^{\mathcal{H}_{2}}$$
$P$ commutes with an element of $G'$ if and only if $P_{1} \otimes \mathcal{I}^{\mathcal{H}_{2}}$ does
As the generators $G'$ act only on $\mathcal{H}_{1}$
If $P$ anticommutes with any element of $G'$ then its expection value in the noncontextual state is zero
Thus any $P$ must commute with all elements of $G'$ and so $P_{1} \otimes \mathcal{I}^{\mathcal{H}_{2}}$ too
As the elements of $G'$ are single-qubit Pauli $Z$ operators acting in $\mathcal{H}_{1}$:
```
print(diagonalized_generators_GuA) # G_j' terms!
```
$P_{1}$ must be a product of such operators!
**As the expectation value of $P_{1}$ is some $p_{1}= \pm 1$ DETERMINED BY THE NONCONTEXTUAL GROUND STATE**
```
eigen_vals_nonC_ground_state_GuA_ops
```
Let $|\psi_{(\vec{q}, \vec{r})} \rangle$ be any quantum state consistent with the noncontextual ground state $(\vec{q}, \vec{r})$... aka one that gives the correct expectation values of:
```
print(diagonalized_generators_GuA)
print(eigen_vals_nonC_ground_state_GuA_ops)
```
Then the action of any $P$ which allows our contextual correction has the form:
$$P |\psi_{(\vec{q}, \vec{r})} \rangle = \big( P_{1}^{\mathcal{H}_{1}} \otimes P_{2}^{\mathcal{H}_{2}} \big) |\psi_{(\vec{q}, \vec{r})} \rangle$$
$$ = p_{1}\big( \mathcal{I}^{\mathcal{H}_{1}} \otimes P_{2}^{\mathcal{H}_{2}} \big) |\psi_{(\vec{q}, \vec{r})} \rangle$$
- repeating the above, but $p_{1}$ is the expectation value of $P_{1}$ determined by the noncontextual ground state!
Thus we can denote $H_{c}' |_{(\vec{q}, \vec{r})}$ as the restriction of $H_{c}'$ on its action on the noncontextual ground state $(\vec{q}, \vec{r})$:
$$H_{c}' |_{(\vec{q}, \vec{r})} =\sum_{\substack{P \in \mathcal{S_{c}'} \\ \text{s.t.} [P, G_{i}']=0 \\ \forall G'_{i} \in G'}} p_{1}h_{P}\big( \mathcal{I}^{\mathcal{H}_{1}} \otimes P_{2}^{\mathcal{H}_{2}} \big) $$
$$=\mathcal{I}_{\mathcal{H}_{1}} \otimes H_{c}'|_{\mathcal{H}_{2}} $$
where we can write:
$$H_{c}'|_{\mathcal{H}_{2}} = \sum_{\substack{P \in \mathcal{S_{c}'} \\ \text{s.t.} [P, G_{i}']=0 \\ \forall G'_{i} \in G'}} p_{1}h_{P}P_{2}^{\mathcal{H}_{2}}$$
Clearly, this Hamiltonian acts on $n_{2}$ qubits, where:
$$n_{2} = n - |G|$$
- $|G|=$ number of noncontextual generators $G_{j}$
```
from copy import deepcopy
import pprint
```
```quantum_correction``` function
```
n_q = len(diagonalized_generators_GuA[0])
rotated_H = deepcopy(ham) ##<-- full Hamiltonian
# iteratively perform R rotation over all terms in the original Hamiltonian
for R in Rotations_list:
newly_rotated_H={}
for P in rotated_H.keys():
lin_comb_Rot_P = c.apply_rotation(R,P) # linear combination of Paulis from R rotation on P
for P_rot in lin_comb_Rot_P:
if P_rot in newly_rotated_H.keys():
newly_rotated_H[P_rot]+=lin_comb_Rot_P[P_rot]*rotated_H[P] # already in it hence +=
else:
newly_rotated_H[P_rot]=lin_comb_Rot_P[P_rot]*rotated_H[P]
rotated_H = deepcopy(newly_rotated_H) ##<-- perform next R rotation on this H
rotated_H
```
next, find the indices of the $Z$ operators in $G'$
```
z_indices = []
for d in diagonalized_generators_GuA:
for i in range(n_q):
if d[i] == 'Z':
z_indices.append(i)
print(diagonalized_generators_GuA)
print(z_indices)
```
**The expectation values of the $P_{1}$ terms are $p_{1}= \pm 1$, DETERMINED BY THE NONCONTEXTUAL GROUND STATE**
```
print(diagonalized_generators_GuA)
print(eigen_vals_nonC_ground_state_GuA_ops)
```
We need to ENFORCE the diagonal generators' assigned values in the diagonal basis to match these expectation values above
```
ham_red = {}
for P in rotated_H.keys():
sgn = 1
for j, z_index in enumerate(z_indices): # enforce diagonal generator's assigned values in diagonal basis
if P[z_index] == 'Z':
sgn = sgn*eigen_vals_nonC_ground_state_GuA_ops[j] #<- eigenvalue of nonC ground state!
elif P[z_index] != 'I':
sgn = 0
if sgn != 0:
# construct term in reduced Hilbert space
P_red = ''
for i in range(n_q):
if not i in z_indices:
P_red = P_red + P[i]
if P_red in ham_red.keys():
ham_red[P_red] = ham_red[P_red] + rotated_H[P]*sgn
else:
ham_red[P_red] = rotated_H[P]*sgn
ham_red
c.quantum_correction(ham, #<- full Ham
model,
fn_form,
ground_state_params)
c.quantum_correction(nonCon_H,model,fn_form,ground_state_params)
c.get_reduced_hamiltonians(ham,
model,
fn_form,
ground_state_params,
list(range(hamiltonians[mol_key][1])))[-1] == rotated_H ### aka when considering all qubit problem it is equal to rotated H!
```
For some reason, it seems that when considering the full Hamiltonian there is no reduction in the number of terms!
Q. Do you expect any term reduction when doing CS-VQE?
```
n2 = hamiltonians[mol_key][1]-len(diagonalized_generators_GuA)
n2
ham_red
ham==Con_H
n_q = len(diagonalized_generators_GuA[0])
rotated_Hcon = deepcopy(Con_H)
# iteratively perform R rotation over all terms in the original Hamiltonian
for R in Rotations_list:
newly_rotated_H={}
for P in rotated_Hcon.keys():
lin_comb_Rot_P = c.apply_rotation(R,P) # linear combination of Paulis from R rotation on P
for P_rot in lin_comb_Rot_P:
if P_rot in newly_rotated_H.keys():
newly_rotated_H[P_rot]+=lin_comb_Rot_P[P_rot]*rotated_Hcon[P] # already in it hence +=
else:
newly_rotated_H[P_rot]=lin_comb_Rot_P[P_rot]*rotated_Hcon[P]
rotated_Hcon = deepcopy(newly_rotated_H) ##<-- perform next R rotation on this H
rotated_Hcon
print(diagonalized_generators_GuA)
print(eigen_vals_nonC_ground_state_GuA_ops)
p1_dict = {Gener.index('Z'): p1 for Gener, p1 in zip(diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops)}
p1_dict
new={}
for P1_P2 in rotated_Hcon.keys():
Z_indices = [i for i, sigma in enumerate(P1_P2) if sigma=='Z']
I1_P2=list(deepcopy(P1_P2))
sign=1
for ind in Z_indices:
sign*=p1_dict[ind]
I1_P2[ind]='I'
I1_P2=''.join(I1_P2)
new[I1_P2]=rotated_Hcon[P1_P2]*sign
new
len(rotated_Hcon)-len(diagonalized_generators_GuA)
# len(new)
H = conv_scr.Get_Operfermion_Hamiltonian(new)
sparseH = qubit_operator_sparse(H, n_qubits=hamiltonians[mol_key][1])
sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0]
H_reduced_subspace={}
for P in rotated_H.keys():
sign=1
for P_known, eigen_val in zip(diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops):
Z_index = P_known.index('Z') # Find single qubit Z in generator!
if P[Z_index]== 'Z': # compare location in genertor to P of rotated H
sign*=eigen_val #<- eigenvalue of nonC ground state!
elif P[Z_index]!= 'I':
sign=0 # MUST anti-commute!
# build reduced Hilbert Space
if sign!=0:
P_new = list(deepcopy(P))
P_new[Z_index]='I'
P_new= ''.join(P_new)
if P_new in H_reduced_subspace.keys():
H_reduced_subspace[P_new] = H_reduced_subspace[P_new] + rotated_H[P]*sign
else:
H_reduced_subspace[P_new] = rotated_H[P]*sign
# else:
# H_reduced_subspace[P]=rotated_H[P]
print(len(rotated_H))
print(len(H_reduced_subspace))
# H_reduced_subspace
lowest_eigenvalue
from openfermion import qubit_operator_sparse
import conversion_scripts as conv_scr
import scipy as sp
H = conv_scr.Get_Operfermion_Hamiltonian(H_reduced_subspace)
sparseH = qubit_operator_sparse(H, n_qubits=hamiltonians[mol_key][1])
sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0]
c.quantum_correction(ham, #<- full Ham
model,
fn_form,
ground_state_params)
lowest_eigenvalue
Hfull = conv_scr.Get_Operfermion_Hamiltonian(ham)
sparseHfull = qubit_operator_sparse(Hfull, n_qubits=hamiltonians[mol_key][1])
FCI = sp.sparse.linalg.eigsh(sparseHfull, which='SA', k=1)[0][0]
print('FCI=', FCI)
sp.sparse.linalg.eigsh(sparseH, which='SA', k=1)[0][0]
print(diagonalized_generators_GuA)
print(eigen_vals_nonC_ground_state_GuA_ops)
p1_dict = {Gener.index('Z'): p1 for Gener, p1 in zip(diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops)}
p1_dict
H_reduced_subspace={}
for P in rotated_Hcon.keys():
new_sign=1
P_new = list(P)
for index, sigma in enumerate(P):
if sigma == 'Z':
new_sign*=p1_dict[index]
P_new[index]='I'
P_new = ''.join(P_new)
H_reduced_subspace[P_new] = rotated_Hcon[P]*new_sign
H_reduced_subspace
H_con_subspace = conv_scr.Get_Operfermion_Hamiltonian(H_reduced_subspace)
sparseH_con_subspace = qubit_operator_sparse(H_con_subspace, n_qubits=hamiltonians[mol_key][1])
sp.sparse.linalg.eigsh(sparseH_con_subspace, which='SA', k=1)[0][0]
# H_reduced_subspace={}
# for P in rotated_Hcon.keys():
# p1=1
# for P_known, eigen_val in zip(diagonalized_generators_GuA, eigen_vals_nonC_ground_state_GuA_ops):
# Z_index = P_known.index('Z') # Find single qubit Z in generator!
# if P[Z_index]== 'Z': # compare location in genertor to P of rotated H
# p1*=eigen_val #<- eigenvalue of nonC ground state!
# P1_P2 = list(deepcopy(P))
# P1_P2[Z_index]='I'
# I1_P2= ''.join(P1_P2)
# if I1_P2 in H_reduced_subspace.keys():
# H_reduced_subspace[I1_P2] += rotated_Hcon[P]*p1
# else:
# H_reduced_subspace[I1_P2] = rotated_Hcon[P]*p1
# elif P[Z_index]== 'I':
# H_reduced_subspace[P] = rotated_Hcon[P]
# elif P[Z_index]!= 'I':
# sign=0 # MUST anti-commute!
# H_reduced_subspace[P]=0
# # # build reduced Hilbert Space
# # if sign!=0:
# # if P_new in H_reduced_subspace.keys():
# # H_reduced_subspace[P_new] = H_reduced_subspace[P_new] + rotated_Hcon[P]*sign
# # else:
# # H_reduced_subspace[P_new] = rotated_Hcon[P_new]*sign
# # # else:
# # # H_reduced_subspace[P]=rotated_H[P]
H_reduced_subspace
nonCon_Energy = lowest_eigenvalue
H_con_subspace = conv_scr.Get_Operfermion_Hamiltonian(H_reduced_subspace)
sparseH_con_subspace = qubit_operator_sparse(H_con_subspace, n_qubits=hamiltonians[mol_key][1])
Con_Energy = sp.sparse.linalg.eigsh(sparseH_con_subspace, which='SA', k=1)[0][0]
Con_Energy+nonCon_Energy
FCI
c.quantum_correction(ham, #<- full Ham
model,
fn_form,
ground_state_params)
FCI-lowest_eigenvalue
Con_Energy
c.commute(P_gen, P)
```
|
github_jupyter
|
> **Note:** In most sessions you will be solving exercises posed in a Jupyter notebook that looks like this one. Because you are cloning a Github repository that only we can push to, you should **NEVER EDIT** any of the files you pull from Github. Instead, what you should do is either make a new notebook and write your solutions in there, or **make a copy of this notebook and save it somewhere else** on your computer, not inside the `sds` folder that you cloned, so you can write your answers in there. If you edit the notebook you pulled from Github, those edits (possibly your solutions to the exercises) may be overwritten and lost the next time you pull from Github. This is important, so don't hesitate to ask if it is unclear.
# Exercise Set 16: Exploratory Data Analysis
*Afternoon, August 22, 2018*
In this exercise set we will be practicing our skills within Exploratory Data Analysis. Furthermore, we will get a little deeper into the mechanics of the K-means clustering algorithm.
## Exercise Section 16.1: Exploratory data analysis with interactive plots
In the following exercise you will practice interactive plotting with the Plotly library.
**Preparation:**
Setting up plotly:
* Install plotly version (2.7.0) using conda: `conda install -c conda-forge plotly==2.7.0`
* Install cufflinks, which binds plotly to the dataframe: `conda install -c conda-forge cufflinks-py`
* Create a user on "https://plot.ly/".
* Login
* Hover over your profile name, and click on settings.
* Get the API key and copy it.
* Run the following command in the notebook
```python
# First time you run it
import plotly
username = 'username' # your.username
api_key = 'apikey' # find it under settings # your.apikey
plotly.tools.set_credentials_file(username=username, api_key=api_key)
```
* Plotly is a sort of social media for graphs, and automatically saves all your figures. If you want to run it in offline mode, run the following in the notebook:
```python
import plotly.offline as py # import plotly in offline mode
py.init_notebook_mode(connected=True) # initialize the offline mode, with access to the internet or not.
import plotly.tools as tls
tls.embed('https://plot.ly/~cufflinks/8') # embed cufflinks.
# import cufflinks and make it offline
import cufflinks as cf
cf.go_offline() # initialize cufflinks in offline mode
```
> **Ex. 16.1.1** Reproduce the plots made in the Lectures. This means doing a scatter plot where the colors (hue=) are the ratings, and the text appears when hovering over a point (text=).
```
#[Answer goes here]
```
## Exercise Section 16.2: Implementing the K-means Clustering algorithm
In the following exercise you will implement your own version of the K-means Clustering Algorithm. This will help you practice the basic matrix operations and syntax in python.
> **Ex. 16.2.0:** First we need to load the dataset to practice on. For this task we will use the famous clustering dataset of properties of 3 iris flower species. This is already built into many packages, including the plotting library seaborn, and can be loaded using the following command: ```df = sns.load_dataset('iris')```
Plot the data as a scatter matrix to inspect that it indeed has some rather obvious clusters: search for seaborn and scatter matrix on google and figure out the command. Color the markers (the nodes in the graph) by setting the ```hue='species'```
```
# [Answer to Ex. 16.2.0]
```
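One possible sketch (not the official answer), using seaborn's `pairplot` as the scatter matrix:
```
import seaborn as sns
import matplotlib.pyplot as plt

df = sns.load_dataset('iris')
sns.pairplot(df, hue='species')  # scatter matrix colored by species
plt.show()
```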
If we weren't biologists and had not already named the three flower species, we might want to find and define the natural groupings using a clustering method. Now you should implement the K-Means Clustering Algorithm.
> **Ex. 16.2.1:** First define a matrix X, by extracting the four columns ('sepal_length','sepal_width','petal_length','petal_width') from the dataframe using the .values method.
```
# [Answer to Ex. 16.2.1]
```
Now we are ready to implement the algorithm.
> **Ex. 16.2.2:** First we write the initialization, our first *Expectation*. This will initialize our first guess of the cluster centroids. This is done by picking K random points from the data: we sample a list of K numbers from the index, and then extract datapoints using this index (same syntax as with a dataframe).
***(hint: use the random.sample function and sample from a range(len(data)))***
Check that this works and wrap it in a function named `initialize_clusters`. The function should take the data and a value of K (number of clusters / intial samples) as input parameters. And return the initial cluster centroids.
```
#[Answer Ex. 16.2.2]
```
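One possible sketch of the initialization step (not the official answer); it assumes `X` is the numpy array from Ex. 16.2.1:
```
import random

def initialize_clusters(data, k):
    """Pick k random rows of the data as the initial cluster centroids."""
    idx = random.sample(range(len(data)), k)
    return data[idx]

centroids = initialize_clusters(X, k=3)
```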
Now we will write the *Maximization* step.
> **Ex. 16.2.3:** The maximization step is done by assigning each datapoint to the closest cluster center/centroid. This means:
* we need to calculate the distance from each point to each centroid (at first it is just our randomly initialized points). This can be done using the sklearn.metrics.pairwise_distances() taking the two matrices as input.
* Next run an argmin operation on the matrix to obtain the cluster_assignments, using the ```.argmin()``` method built into the matrix object. The argmin gives you the index of smallest value, and not the smallest value itself. Remember to choose the right axis to apply the argmin operation on - i.e. columns or rows to minimize. You do this setting the axis= argument. ```.argmin(axis=0)``` applies it on the columns and ```.argmin(axis=1)``` applies it on the rows.
Finally wrap these operations into a function `maximize` that takes the cluster centers, and the data as input. And return the cluster assignments.
```
#[Answer Ex. 16.2.3]
```
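One possible sketch of the maximization step (not the official answer):
```
from sklearn.metrics import pairwise_distances

def maximize(centroids, X):
    """Assign each row of X to the index of its nearest centroid."""
    distances = pairwise_distances(X, centroids)  # shape (n_samples, k)
    return distances.argmin(axis=1)               # closest centroid per row
```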
> **Ex. 16.2.4:** Now we want to update our Expectation of the cluster centroids.
We calculate new cluster centroids, by applying the builtin ```.mean``` function on the subset of the data that is assigned to each cluster.
First you define a container for the new centroids. Using the function: `np.zeros(shape)`. The `shape` parameter should be a tuple with the dimensions of matrix of cluster centroids i.e. (k, n_columns).
>For each cluster you *(this can be done using a for loop from 0 to k number of clusters)*:
* filter the data with a boolean vector that is True if the cluster assignment of the datapoint is equal to the cluster. The indexing is done in the same way as you would do with a dataframe.
* calculate the mean on the subset of the data. Make sure you are doing it on the right axis. (axis=0) is on the columns, and axis=1 is on the rows.
* store it in a container
Each cluster center should be a vector of 4 values [val,val2,val3,val4] so make sure you take the mean on the right axis. ```.mean(axis=?)```.
Finally wrap these operations into a function `update_expectation` that takes the, `k`, the data `X`, and the `cluster_assignment` as input. And return the new cluster centers.
```
#[Answer 16.2.4]
```
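One possible sketch of the expectation update (not the official answer); it assumes every cluster keeps at least one assigned point:
```
import numpy as np

def update_expectation(k, X, cluster_assignment):
    """Recompute each centroid as the column-wise mean of its assigned points."""
    centroids = np.zeros((k, X.shape[1]))
    for cluster in range(k):
        members = X[cluster_assignment == cluster]
        centroids[cluster] = members.mean(axis=0)
    return centroids
```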
Lastly we put it all together in the canonical scikit-learn ".fit()" function. This function will use the other functions. The important new things here are setting the number of maximization steps, and checking if the solution has converged, i.e. it is stable with little to no change.
Pipeline is the following:
* First we initialize our cluster centroids using the initialization function.
* Then we run the maximization function until convergence. Convergence is checked by comparing whether the old_centroids from the previous step are equal to the new centroids.
* Once convergence is reached we have our final cluster centroids, and the final cluster assignment.
> **Ex. 16.2.5:** You should now implement it by doing the following:
* Define a maximum number of iterations `max_iter` of 15.
* Use the `initialize_clusters` function to define a variable `centroids`.
* make a `for` loop from 0 to max_iter where you:
* copy the current cluster centroids to a new variable: old_centroids. This will be used for checking convergence after the maximization step.
* define the `cluster_assignment` by running the `maximize` function
* define a new (i.e. overwrite) `centroids` variable by running the `update_expectation` function.
* finally check if old_centroids is equal to the new centroids, using the np.array_equal() function. If they are: break.
Make sure that it works and wrap it in a function `fit_transform()` that takes the data `X` as input, plus the number of clusters `k` and the maximum number of iterations `max_iter`. It should return the cluster assignments and the cluster centroids.
```
#[Answer exercise 16.2.5]
```
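One possible sketch of the full fit loop (not the official answer); it assumes the `initialize_clusters`, `maximize` and `update_expectation` functions from the previous exercises are defined:
```
import numpy as np

def fit_transform(X, k, max_iter=15):
    """Run the K-means loop described above until convergence or max_iter."""
    centroids = initialize_clusters(X, k)
    for _ in range(max_iter):
        old_centroids = centroids.copy()
        cluster_assignment = maximize(centroids, X)
        centroids = update_expectation(k, X, cluster_assignment)
        if np.array_equal(old_centroids, centroids):  # converged
            break
    return cluster_assignment, centroids

cluster_assignment, centroids = fit_transform(X, k=3)
```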
> **Ex. 16.2.6:** Run the algorithm and create a new variable `'cluster'` in your dataframe using the cluster_assignments. Count the overlap between the species and each cluster by using the `pd.pivot_table()` method. Set the `aggfunc=` argument to the 'count' method.
extra: To avoid a local minimum (due to unlucky random initialization) you should run the algorithm more than once. Write a function that fits the algorithm N number of times, and evaluates the best solution by calculating the ratio between the average distance between all points within the same cluster and the average distance to points outside one's cluster.
```
#[Answer to Ex. 16.2.6]
```
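One possible sketch of the overlap count (not the official answer), assuming `df` and `cluster_assignment` from above:
```
import pandas as pd

df['cluster'] = cluster_assignment
overlap = pd.pivot_table(df, index='species', columns='cluster',
                         values='sepal_length', aggfunc='count')
print(overlap)
```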
|
github_jupyter
|
# MASH analysis pipeline with data-driven prior matrices
This notebook is a pipeline written in SoS to run `flashr + mashr` for multivariate analysis described in Urbut et al (2019). This pipeline was last applied to analyze GTEx V8 eQTL data, although it can be used as is to perform similar multivariate analysis for other association studies.
*Version: 2021.02.28 by Gao Wang and Yuxin Zou*
```
%revisions -s
```
## Data overview
`fastqtl` summary statistics data were obtained from dbGaP (data on CRI at UChicago Genetic Medicine). It has 49 tissues. [more description to come]
## Preparing MASH input
Using an established workflow (which takes 33hrs to run on a cluster system as configured by `midway2.yml`; see inside `fastqtl_to_mash.ipynb` for a note on computing environment),
```
INPUT_DIR=/project/compbio/GTEx_dbGaP/GTEx_Analysis_2017-06-05_v8/eqtl/GTEx_Analysis_v8_eQTL_all_associations
JOB_OPT="-c midway2.yml -q midway2"
sos run workflows/fastqtl_to_mash.ipynb --data-list $INPUT_DIR/FastQTLSumStats.list --common-suffix ".allpairs.txt" $JOB_OPT
```
As a result of command above I obtained the "mashable" data-set in the same format [as described here](https://stephenslab.github.io/gtexresults/gtexdata.html).
### Some data integrity check
1. Check if I get the same number of groups (genes) at the end of HDF5 data conversion:
```
$ zcat Whole_Blood.allpairs.txt.gz | cut -f1 | sort -u | wc -l
20316
$ h5ls Whole_Blood.allpairs.txt.h5 | wc -l
20315
```
The results agreed on Whole Blood sample (the original data has a header thus one line more than the H5 version). We should be good (since the pipeline reported success for all other files).
### Data & job summary
The command above took 33 hours on UChicago RCC `midway2`.
```
[MW] cat FastQTLSumStats.log
39832 out of 39832 groups merged!
```
So we have a total of 39832 genes (union of 49 tissues).
```
[MW] cat FastQTLSumStats.portable.log
15636 out of 39832 groups extracted!
```
We have 15636 groups without missing data in any tissue. This will be used to train the MASH model.
The "mashable" data file is `FastQTLSumStats.mash.rds`, 124Mb serialized R file.
## Multivariate adaptive shrinkage (MASH) analysis of eQTL data
Below is a "blackbox" implementation of the `mashr` eQTL workflow -- blackbox in the sense that you can run this pipeline as an executable, without thinking too much about it, if you see your problem fits our GTEx analysis scheme. However when reading it as a notebook it is a good source of information to help developing your own `mashr` analysis procedures.
Since the submission to bioRxiv of Urbut 2017, we have improved the implementation of the MASH algorithm and made a new R package, [`mashr`](https://github.com/stephenslab/mashr). Major improvements compared to Urbut 2019 are:
1. Faster computation of likelihood and posterior quantities via matrix algebra tricks and a C++ implementation.
2. Faster computation of MASH mixture via convex optimization.
3. Replace `SFA` with `FLASH`, a new sparse factor analysis method to generate prior covariance candidates.
4. Improve estimate of residual variance $\hat{V}$.
At this point, the input data have already been converted from the original eQTL summary statistics to a format convenient for analysis in MASH, as a result of running the data conversion pipeline in `fastqtl_to_mash.ipynb`.
Example command:
```bash
JOB_OPT="-j 8"
#JOB_OPT="-c midway2.yml -q midway2"
sos run workflows/mashr_flashr_workflow.ipynb mash $JOB_OPT # --data ... --cwd ... --vhat ...
```
**FIXME: add comments on submitting jobs to HPC. Here we use the UChicago RCC cluster but other users can similarly configure their computing system to run the pipeline on HPC.**
### Global parameter settings
```
[global]
parameter: cwd = path('./mashr_flashr_workflow_output')
# Input summary statistics data
parameter: data = path("fastqtl_to_mash_output/FastQTLSumStats.mash.rds")
# Prefix of output files. If not specified, it will derive it from data.
# If it is specified, for example, `--output-prefix AnalysisResults`
# It will save output files as `{cwd}/AnalysisResults*`.
parameter: output_prefix = ''
# Exchangable effect (EE) or exchangable z-scores (EZ)
parameter: effect_model = 'EZ'
# Identifier of $\hat{V}$ estimate file
# Options are "identity", "simple", "mle", "vhat_corshrink_xcondition", "vhat_simple_specific"
parameter: vhat = 'mle'
parameter: mixture_components = ['flash', 'flash_nonneg', 'pca']
data = data.absolute()
cwd = cwd.absolute()
if len(output_prefix) == 0:
output_prefix = f"{data:bn}"
prior_data = file_target(f"{cwd:a}/{output_prefix}.{effect_model}.prior.rds")
vhat_data = file_target(f"{cwd:a}/{output_prefix}.{effect_model}.V_{vhat}.rds")
mash_model = file_target(f"{cwd:a}/{output_prefix}.{effect_model}.V_{vhat}.mash_model.rds")
def sort_uniq(seq):
seen = set()
return [x for x in seq if not (x in seen or seen.add(x))]
```
### Command interface
```
sos run mashr_flashr_workflow.ipynb -h
```
## Factor analyses
```
# Perform FLASH analysis with default (unconstrained) factors (time estimate: 20min)
[flash]
input: data
output: f"{cwd}/{output_prefix}.flash.rds"
task: trunk_workers = 1, walltime = '2h', trunk_size = 1, mem = '8G', cores = 2, tags = f'{_output:bn}'
R: expand = "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout'
dat = readRDS(${_input:r})
dat = mashr::mash_set_data(dat$strong.b, Shat=dat$strong.s, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3)
res = mashr::cov_flash(dat, factors="default", remove_singleton=${"TRUE" if "canonical" in mixture_components else "FALSE"}, output_model="${_output:n}.model.rds")
saveRDS(res, ${_output:r})
# Perform FLASH analysis with non-negative factor constraint (time estimate: 20min)
[flash_nonneg]
input: data
output: f"{cwd}/{output_prefix}.flash_nonneg.rds"
task: trunk_workers = 1, walltime = '2h', trunk_size = 1, mem = '8G', cores = 2, tags = f'{_output:bn}'
R: expand = "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout'
dat = readRDS(${_input:r})
dat = mashr::mash_set_data(dat$strong.b, Shat=dat$strong.s, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3)
res = mashr::cov_flash(dat, factors="nonneg", remove_singleton=${"TRUE" if "canonical" in mixture_components else "FALSE"}, output_model="${_output:n}.model.rds")
saveRDS(res, ${_output:r})
[pca]
# Number of components in PCA analysis for prior
# set to 3 as in mash paper
parameter: npc = 3
input: data
output: f"{cwd}/{output_prefix}.pca.rds"
task: trunk_workers = 1, walltime = '1h', trunk_size = 1, mem = '4G', cores = 2, tags = f'{_output:bn}'
R: expand = "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout'
dat = readRDS(${_input:r})
dat = mashr::mash_set_data(dat$strong.b, Shat=dat$strong.s, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3)
res = mashr::cov_pca(dat, ${npc})
saveRDS(res, ${_output:r})
```
### Estimate residual variance
FIXME: add some narratives here explaining what we do in each method.
```
# V estimate: "identity" method
[vhat_identity]
input: data
output: f'{vhat_data:nn}.V_identity.rds'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
dat = readRDS(${_input:r})
saveRDS(diag(ncol(dat$random.b)), ${_output:r})
# V estimate: "simple" method (using null z-scores)
[vhat_simple]
depends: R_library("mashr")
input: data
output: f'{vhat_data:nn}.V_simple.rds'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
library(mashr)
dat = readRDS(${_input:r})
vhat = estimate_null_correlation_simple(mash_set_data(dat$random.b, Shat=dat$random.s, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3))
saveRDS(vhat, ${_output:r})
# V estimate: "mle" method
[vhat_mle]
# number of samples to use
parameter: n_subset = 6000
# maximum number of iterations
parameter: max_iter = 6
depends: R_library("mashr")
input: data, prior_data
output: f'{vhat_data:nn}.V_mle.rds'
task: trunk_workers = 1, walltime = '36h', trunk_size = 1, mem = '4G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
library(mashr)
dat = readRDS(${_input[0]:r})
# choose random subset
set.seed(1)
random.subset = sample(1:nrow(dat$random.b), min(${n_subset}, nrow(dat$random.b)))
random.subset = mash_set_data(dat$random.b[random.subset,], dat$random.s[random.subset,], alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3)
# estimate V mle
vhat = estimate_null_correlation(random.subset, readRDS(${_input[1]:r}), max_iter = ${max_iter})
saveRDS(vhat, ${_output:r})
# Estimate each V separately via corshrink
[vhat_corshrink_xcondition_1]
# Utility script
parameter: util_script = path('/project/mstephens/gtex/scripts/SumstatQuery.R')
# List of genes to analyze
parameter: gene_list = path()
fail_if(not gene_list.is_file(), msg = 'Please specify valid path for --gene-list')
fail_if(not util_script.is_file() and len(str(util_script)), msg = 'Please specify valid path for --util-script')
genes = sort_uniq([x.strip().strip('"') for x in open(f'{gene_list:a}').readlines() if not x.strip().startswith('#')])
depends: R_library("CorShrink")
input: data, for_each = 'genes'
output: f'{vhat_data:nn}/{vhat_data:bnn}_V_corshrink_{_genes}.rds'
task: trunk_workers = 1, walltime = '3m', trunk_size = 500, mem = '3G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
source(${util_script:r})
CorShrink_sum = function(gene, database, z_thresh = 2){
print(gene)
dat <- GetSS(gene, database)
z = dat$"z-score"
max_absz = apply(abs(z), 1, max)
nullish = which(max_absz < z_thresh)
# if (length(nullish) < ncol(z)) {
# stop("not enough null data to estimate null correlation")
# }
if (length(nullish) <= 1){
mat = diag(ncol(z))
} else {
nullish_z = z[nullish, ]
mat = as.matrix(CorShrink::CorShrinkData(nullish_z, ash.control = list(mixcompdist = "halfuniform"))$cor)
}
return(mat)
}
V = CorShrink_sum("${_genes}", ${data:r})
saveRDS(V, ${_output:r})
# Estimate each V separately via "simple" method
[vhat_simple_specific_1]
# Utility script
parameter: util_script = path('/project/mstephens/gtex/scripts/SumstatQuery.R')
# List of genes to analyze
parameter: gene_list = path()
fail_if(not gene_list.is_file(), msg = 'Please specify valid path for --gene-list')
fail_if(not util_script.is_file() and len(str(util_script)), msg = 'Please specify valid path for --util-script')
genes = sort_uniq([x.strip().strip('"') for x in open(f'{gene_list:a}').readlines() if not x.strip().startswith('#')])
depends: R_library("Matrix")
input: data, for_each = 'genes'
output: f'{vhat_data:nn}/{vhat_data:bnn}_V_simple_{_genes}.rds'
task: trunk_workers = 1, walltime = '1m', trunk_size = 500, mem = '3G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
source(${util_script:r})
simple_V = function(gene, database, z_thresh = 2){
print(gene)
dat <- GetSS(gene, database)
z = dat$"z-score"
max_absz = apply(abs(z), 1, max)
nullish = which(max_absz < z_thresh)
# if (length(nullish) < ncol(z)) {
# stop("not enough null data to estimate null correlation")
# }
if (length(nullish) <= 1){
mat = diag(ncol(z))
} else {
nullish_z = z[nullish, ]
mat = as.matrix(Matrix::nearPD(as.matrix(cov(nullish_z)), conv.tol=1e-06, doSym = TRUE, corr=TRUE)$mat)
}
return(mat)
}
V = simple_V("${_genes}", ${data:r})
saveRDS(V, ${_output:r})
# Consolidate Vhat into one file
[vhat_corshrink_xcondition_2, vhat_simple_specific_2]
depends: R_library("parallel")
# List of genes to analyze
parameter: gene_list = path()
fail_if(not gene_list.is_file(), msg = 'Please specify valid path for --gene-list')
genes = paths([x.strip().strip('"') for x in open(f'{gene_list:a}').readlines() if not x.strip().startswith('#')])
input: group_by = 'all'
output: f"{vhat_data:nn}.V_{step_name.rsplit('_',1)[0]}.rds"
task: trunk_workers = 1, walltime = '1h', trunk_size = 1, mem = '4G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
library(parallel)
files = sapply(c(${genes:r,}), function(g) paste0(c(${_input[0]:adr}), '/', g, '.rds'), USE.NAMES=FALSE)
V = mclapply(files, function(i){ readRDS(i) }, mc.cores = 1)
R = dim(V[[1]])[1]
L = length(V)
V.array = array(as.numeric(unlist(V)), dim=c(R, R, L))
saveRDS(V.array, ${_output:ar})
```
### Compute MASH priors
Main references are our `mashr` vignettes: [this one for the mashr eQTL outline](https://stephenslab.github.io/mashr/articles/eQTL_outline.html) and [this one for using a FLASH prior](https://github.com/stephenslab/mashr/blob/master/vignettes/flash_mash.Rmd).
The outcome of this workflow should be found under the `./mashr_flashr_workflow_output` folder (configurable). File names have the pattern `*.mash_model_*.rds`. They can be used to compute posteriors for an input list of gene-SNP pairs (see next section).
```
# Compute data-driven / canonical prior matrices (time estimate: 2h ~ 12h for a mixture of ~30 49-by-49 matrices)
[prior]
depends: R_library("mashr")
# if vhat method is `mle` it should use V_simple to analyze the data to provide a rough estimate, then later be refined via `mle`.
input: [data, vhat_data if vhat != "mle" else f'{vhat_data:nn}.V_simple.rds'] + [f"{cwd}/{output_prefix}.{m}.rds" for m in mixture_components]
output: prior_data
task: trunk_workers = 1, walltime = '36h', trunk_size = 1, mem = '4G', cores = 4, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
library(mashr)
rds_files = c(${_input:r,})
dat = readRDS(rds_files[1])
vhat = readRDS(rds_files[2])
mash_data = mash_set_data(dat$strong.b, Shat=dat$strong.s, V=vhat, alpha=${1 if effect_model == 'EZ' else 0}, zero_Bhat_Shat_reset = 1E3)
# setup prior
U = list(XtX = t(mash_data$Bhat) %*% mash_data$Bhat / nrow(mash_data$Bhat))
for (f in rds_files[3:length(rds_files)]) U = c(U, readRDS(f))
U.ed = cov_ed(mash_data, U, logfile=${_output:nr})
# Canonical matrices
U.can = cov_canonical(mash_data)
saveRDS(c(U.ed, U.can), ${_output:r})
```
## `mashr` mixture model fitting
```
# Fit MASH mixture model (time estimate: <15min for 70K by 49 matrix)
[mash_1]
depends: R_library("mashr")
input: data, vhat_data, prior_data
output: mash_model
task: trunk_workers = 1, walltime = '36h', trunk_size = 1, mem = '4G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
library(mashr)
dat = readRDS(${_input[0]:r})
vhat = readRDS(${_input[1]:r})
U = readRDS(${_input[2]:r})
mash_data = mash_set_data(dat$random.b, Shat=dat$random.s, alpha=${1 if effect_model == 'EZ' else 0}, V=vhat, zero_Bhat_Shat_reset = 1E3)
saveRDS(mash(mash_data, Ulist = U, outputlevel = 1), ${_output:r})
```
### Optional posterior computations
Additionally, compute posteriors for the "strong" set in the MASH input data.
```
# Compute posterior for the "strong" set of data as in Urbut et al 2017.
# This is optional because most of the time we want to apply the
# MASH model learned above to a much larger data-set.
[mash_2]
# default to True; use --no-compute-posterior to disable this
parameter: compute_posterior = True
# input Vhat file for the batch of posterior data
skip_if(not compute_posterior)
depends: R_library("mashr")
input: data, vhat_data, mash_model
output: f"{cwd:a}/{output_prefix}.{effect_model}.posterior.rds"
task: trunk_workers = 1, walltime = '36h', trunk_size = 1, mem = '4G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
library(mashr)
dat = readRDS(${_input[0]:r})
vhat = readRDS(${_input[1]:r})
mash_data = mash_set_data(dat$strong.b, Shat=dat$strong.s, alpha=${1 if effect_model == 'EZ' else 0}, V=vhat, zero_Bhat_Shat_reset = 1E3)
mash_model = readRDS(${_input[2]:ar})
saveRDS(mash_compute_posterior_matrices(mash_model, mash_data), ${_output:r})
```
## Compute MASH posteriors
In the GTEx V6 paper we assumed one eQTL per gene and applied the model learned above to those SNPs. Under that assumption, the input data for posterior calculation will be the `dat$strong.*` matrices.
It is a fairly straightforward procedure as shown in [this vignette](https://stephenslab.github.io/mashr/articles/eQTL_outline.html).
But it is often more interesting to apply MASH to a given list of eQTLs, e.g., those from fine-mapping results. In the GTEx V8 analysis we obtain such gene-SNP pairs from the DAP-G fine-mapping analysis. See [this notebook](https://stephenslab.github.io/gtex-eqtls/analysis/Independent_eQTL_Results.html) for how the input data is prepared. The workflow below takes a number of input chunks (each chunk is a list of matrices `dat$Bhat` and `dat$Shat`)
and computes posteriors for each chunk. It is therefore suited to running the posterior computation for all gene-SNP pairs in parallel, provided the input data chunks are supplied.
```
JOB_OPT="-c midway2.yml -q midway2"
DATA_DIR=/project/compbio/GTEx_eQTL/independent_eQTL
sos run workflows/mashr_flashr_workflow.ipynb posterior \
$JOB_OPT \
--posterior-input $DATA_DIR/DAPG_pip_gt_0.01-AllTissues/DAPG_pip_gt_0.01-AllTissues.*.rds \
$DATA_DIR/ConditionalAnalysis_AllTissues/ConditionalAnalysis_AllTissues.*.rds
```
```
# Apply posterior calculations
[posterior]
parameter: mash_model = path(f"{cwd:a}/{output_prefix}.{effect_model}.V_{vhat}.mash_model.rds")
parameter: posterior_input = paths()
parameter: posterior_vhat_files = paths()
# eg, if data is saved in R list as data$strong, then
# when you specify `--data-table-name strong` it will read the data as
# readRDS('{_input:r}')$strong
parameter: data_table_name = ''
parameter: bhat_table_name = 'Bhat'
parameter: shat_table_name = 'Shat'
mash_model = f"{mash_model:a}"
skip_if(len(posterior_input) == 0, msg = "No posterior input data to compute on. Please specify it using --posterior-input.")
fail_if(len(posterior_vhat_files) > 1 and len(posterior_vhat_files) != len(posterior_input), msg = "length of --posterior-input and --posterior-vhat-files do not agree.")
for p in posterior_input:
fail_if(not p.is_file(), msg = f'Cannot find posterior input file ``{p}``')
depends: R_library("mashr"), mash_model
input: posterior_input, group_by = 1
output: f"{_input:n}.posterior.rds"
task: trunk_workers = 1, walltime = '20h', trunk_size = 1, mem = '20G', cores = 1, tags = f'{_output:bn}'
R: expand = "${ }", workdir = cwd, stderr = f"{_output:n}.stderr", stdout = f"{_output:n}.stdout"
library(mashr)
data = readRDS(${_input:r})${('$' + data_table_name) if data_table_name else ''}
vhat = readRDS("${vhat_data if len(posterior_vhat_files) == 0 else posterior_vhat_files[_index]}")
mash_data = mash_set_data(data$${bhat_table_name}, Shat=data$${shat_table_name}, alpha=${1 if effect_model == 'EZ' else 0}, V=vhat, zero_Bhat_Shat_reset = 1E3)
saveRDS(mash_compute_posterior_matrices(readRDS(${mash_model:r}), mash_data), ${_output:r})
```
### Posterior results
1. The `[posterior]` step should produce a number of serialized R objects `*.batch_*.posterior.rds` (they can be loaded into R via `readRDS()`) -- I chopped the data into batches to take advantage of computing on multiple cluster nodes. It should be self-explanatory, but please let me know otherwise.
2. Other posterior related files are:
    1. `*.batch_*.yaml`: gene-SNP pairs of interest, identified elsewhere (e.g., by fine-mapping analysis).
    2. The corresponding univariate summary statistics for the gene-SNP pairs in `*.batch_*.yaml` are extracted and saved to `*.batch_*.rds`, creating the input to the `[posterior]` step.
    3. Note that the `*.batch_*.stdout` files document some SNPs found in the fine-mapping results but not in the original `fastqtl` output.
# Visualisation in Python - Matplotlib
Here is the sales dataset for an online retailer. The data was collected over a period of three years, 2012 to 2015, and contains information about the sales made by the company.

The products captured belong to three categories:
- Furniture
- Office Supplies
- Technology

Also, the company caters to five different markets:
- USCA
- LATAM
- ASPAC
- EUR
- AFR
We will be using the 'pyplot' package of the Matplotlib library.
```
# importing numpy and the pyplot package of matplotlib
import numpy as np
import matplotlib.pyplot as plt
# Creating an array with product categories
product_cat = np.array(['Furniture','Technology','Office Supplies'])
# Creating an array with the sales amount
# Furniture: 4110451.90
# Technology: 4744557.50
# Office Supplies: 3787492.52
sales_amt = np.array([4110451.90,4744557.50,3787492.52])
print(sales_amt)
```
## Bar Graph: Plotting sales across each product category
```
# plotting the bar graph with product categories on the x-axis and sales amount on the y-axis
plt.bar(product_cat,sales_amt)
# necessary command to display the created graph
plt.show()
```
### Adding title and labeling axes in the graph
```
# plotting the bar graph with product categories on the x-axis and sales amount on the y-axis
plt.bar(product_cat, sales_amt)
# adding title to the graph
plt.title("Sales Across Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})
# labeling axes
plt.xlabel("Product Category", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
# necessary command to display the created graph
plt.show()
```
#### Modifying the bars in the graph
```
# changing color of the bars in the bar graph
# plotting the bar graph with product categories on the x-axis and sales amount on the y-axis
plt.bar(product_cat, sales_amt, color='cyan', edgecolor='orange')
# adding title to the graph
plt.title("Sales Across Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})
# labeling axes
plt.xlabel("Product Category", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
# necessary command to display the created graph
plt.show()
```
#### Adjusting tick values and the value labels
```
# plotting the bar graph with product categories on the x-axis and sales amount on the y-axis
plt.bar(product_cat, sales_amt, color='cyan', edgecolor='orange')
# adding title to the graph
plt.title("Sales Across Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})
# labeling axes
plt.xlabel("Product Category", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
# Modifying the y-axis ticks to show sales in lakhs (1 lakh = 100,000)
tick_values = np.arange(0, 8000000, 1000000)
tick_labels = ["0L", "10L", "20L", "30L", "40L", "50L", "60L", "70L"]
plt.yticks(tick_values, tick_labels)
plt.show()
```
## Scatter Chart: Plotting Sales vs Profits
Scatter plots are used when you want to show the relationship between two facts or measures.
Now, you have the sales and profit data of different product categories across different countries. Let's try to build scatterplots to visualise the data at hand.
```
# Sales and Profit data for different product categories across different countries
sales = np.array ([1013.14, 8298.48, 875.51, 22320.83, 9251.6, 4516.86, 585.16, 836154.03, 216748.48, 174.2, 27557.79, 563.25, 558.11, 37117.45, 357.36, 2206.96, 709.5, 35064.03, 7230.78, 235.33, 148.32, 3973.27, 11737.8, 7104.63, 83.67, 5569.83, 92.34, 107104.36, 1045.62, 9072.51, 42485.82, 5093.82, 14846.16, 943.92, 684.36, 15012.03, 38196.18, 2448.75, 28881.96, 13912.14, 4507.2, 4931.06, 12805.05, 67912.73, 4492.2, 1740.01, 458.04, 16904.32, 21744.53, 10417.26, 18665.33, 2808.42, 54195.57, 67332.5, 24390.95, 1790.43, 2234.19, 9917.5, 7408.14, 36051.99, 1352.22, 1907.7, 245722.14, 2154.66, 1078.21, 3391.65, 28262.73, 5177.04, 66.51, 2031.34, 1683.72, 1970.01, 6515.82, 1055.31, 1029.48, 5303.4, 1850.96, 1159.41, 39989.13, 1183.87, 96365.09, 8356.68, 7010.24, 23119.23, 46109.28, 146071.84, 242259.03, 9058.95, 1313.67, 31525.06, 2019.94, 703.04, 1868.79, 700.5, 55512.02, 243.5, 2113.18, 11781.81, 262189.49, 3487.29, 513.12, 312050.42, 5000.7, 121.02, 1302.78, 169.92, 124.29, 57366.05, 29445.93, 4614.3, 45009.98, 309.24, 3353.67, 41348.34, 2280.27, 61193.7, 1466.79, 12419.94, 445.12, 25188.65, 263514.92, 12351.23, 1152.3, 26298.81, 9900.78, 5355.57, 2325.66, 6282.81, 127707.92, 1283.1, 3560.15, 3723.84, 13715.01, 4887.9, 3396.89, 33348.42, 625.02, 1665.48, 32486.97, 340212.44, 20516.22, 8651.16, 13590.06, 2440.35, 6462.57, 1770.13, 7527.18, 1433.65, 423.3, 21601.72, 10035.72, 2378.49, 3062.38, 719469.32, 179366.79, 345.17, 30345.78, 300.71, 940.81, 36468.08, 1352.85, 1755.72, 2391.96, 19.98, 19792.8, 15633.88, 7.45, 521.67, 1118.24, 7231.68, 12399.32, 204.36, 23.64, 5916.48, 313.98, 108181.5, 9212.42, 27476.91, 1761.33, 289.5, 780.3, 15098.46, 813.27, 47.55, 8323.23, 22634.64, 1831.02, 28808.1, 10539.78, 588.99, 939.78, 7212.41, 15683.01, 41369.09, 5581.6, 403.36, 375.26, 12276.66, 15393.56, 76.65, 5884.38, 18005.49, 3094.71, 43642.78, 35554.83, 22977.11, 1026.33, 665.28, 9712.49, 6038.52, 30756.51, 3758.25, 4769.49, 2463.3, 160153.16, 967.11, 2311.74, 1414.83, 12764.91, 4191.24, 110.76, 637.34, 1195.12, 2271.63, 804.12, 196.17, 167.67, 131.77, 2842.05, 9969.12, 1784.35, 3098.49, 25005.54, 1300.1, 118697.39, 7920.54, 6471.78, 31707.57, 37636.47, 118777.77, 131170.76, 3980.88, 3339.39, 26563.9, 4038.73, 124.8, 196.65, 2797.77, 29832.76, 184.84, 79.08, 8047.83, 205313.25, 1726.98, 899.73, 224.06, 304763.54, 6101.31, 729.6, 896.07, 17.82, 26.22, 46429.78, 31167.27, 2455.94, 37714.3, 1506.93, 3812.78, 25223.34, 3795.96, 437.31, 41278.86, 2091.81, 6296.61, 468.82, 23629.64, 160435.53, 9725.46, 1317.03, 1225.26, 30034.08, 7893.45, 2036.07, 215.52, 3912.42, 82783.43, 253.14, 966.96, 3381.26, 164.07, 1984.23, 75.12, 25168.17, 3295.53, 991.12, 10772.1, 44.16, 1311.45, 35352.57, 245783.54, 20.49, 13471.06, 8171.16, 14075.67, 611.82, 3925.56, 981.84, 10209.84, 156.56, 243.06, 21287.52, 7300.51, 434.52, 6065.0, 741577.51, 132461.03, 224.75, 28953.6, 757.98, 528.15, 34922.41, 50.58, 2918.48, 1044.96, 22195.13, 3951.48, 6977.64, 219.12, 5908.38, 10987.46, 4852.26, 445.5, 71860.82, 14840.45, 24712.08, 1329.9, 1180.44, 85.02, 10341.63, 690.48, 1939.53, 20010.51, 914.31, 25223.82, 12804.66, 2124.24, 602.82, 2961.66, 15740.79, 74138.35, 7759.39, 447.0, 2094.84, 22358.95, 21734.53, 4223.73, 17679.53, 1019.85, 51848.72, 69133.3, 30146.9, 705.48, 14508.88, 7489.38, 20269.44, 246.12, 668.13, 768.93, 215677.35, 899.16, 2578.2, 4107.99, 20334.57, 366.84, 3249.27, 98.88, 3497.88, 3853.05, 786.75, 1573.68, 458.36, 1234.77, 1094.22, 2300.61, 970.14, 3068.25, 35792.85, 4277.82, 71080.28, 3016.86, 3157.49, 15888.0, 
30000.36, 140037.89, 216056.25, 1214.22, 1493.94, 32036.69, 4979.66, 106.02, 46257.68, 1033.3, 937.32, 3442.62, 160633.45, 213.15, 338.88, 242117.13, 9602.34, 2280.99, 73759.08, 23526.12, 6272.74, 43416.3, 576.78, 1471.61, 20844.9, 3497.7, 56382.38, 902.58, 6235.26, 48.91, 32684.24, 276611.58, 13370.38, 10595.28, 4555.14, 10084.38, 267.72, 1012.95, 4630.5, 149433.51, 364.32, 349.2, 4647.56, 504.0, 10343.52, 5202.66, 2786.26, 34135.95, 2654.58, 24699.51, 339239.87, 136.26, 23524.51, 8731.68, 8425.86, 835.95, 11285.19])
profit = np.array([-1213.46, 1814.13, -1485.7, -2286.73, -2872.12, 946.8, 198.48, 145454.95, 49476.1, -245.56, 5980.77, -790.47, -895.72, -34572.08, 117.9, 561.96, 152.85, 1426.05, 1873.17, -251.03, 68.22, 635.11, 3722.4, -3168.63, 27.6, 952.11, 7.38, 20931.13, 186.36, -5395.38, 9738.45, 525.27, 3351.99, 120.78, 266.88, 3795.21, 8615.97, 609.54, 7710.57, 2930.43, 1047.96, -2733.32, 2873.73, -5957.89, -909.6, 163.41, -376.02, -6322.68, -10425.86, 2340.36, -28430.53, 756.12, 12633.33, 7382.54, -14327.69, 436.44, 683.85, -694.91, 1960.56, 10925.82, 334.08, 425.49, 53580.2, 1024.56, 110.93, 632.22, 8492.58, 1418.88, 19.26, -2567.57, 346.26, 601.86, 1318.68, 304.05, 428.37, 1416.24, -2878.18, 283.41, 12611.04, 261.95, -648.43, 1112.88, -2640.29, 6154.32, 11558.79, 15291.4, 56092.65, 1515.39, 342.03, -10865.66, -902.8, 351.52, 364.17, 87.72, 11565.66, 75.4, 289.33, 3129.63, 50795.72, 783.72, 215.46, 29196.89, 1147.26, 53.22, 286.56, 73.02, 42.24, 13914.85, 5754.54, 998.04, -1476.04, 86.58, -1636.35, 10511.91, 647.34, 13768.62, 338.67, 3095.67, 173.84, 5632.93, 64845.11, 3297.33, 338.61, 7246.62, 2255.52, 1326.36, 827.64, 1100.58, 9051.36, 412.23, 1063.91, 940.59, 3891.84, 1599.51, 1129.57, 8792.64, 6.24, 592.77, 8792.85, 47727.5, -4597.68, 2242.56, 3546.45, 321.87, 1536.72, -2463.29, 1906.08, -1916.99, 186.24, 3002.05, -3250.98, 554.7, 830.64, 122612.79, 33894.21, -559.03, 7528.05, -477.67, -1660.25, -33550.96, 481.68, 425.08, 450.3, 9.57, -3025.29, 2924.62, -11.84, 87.36, 26.51, 1727.19, -6131.18, 59.16, 3.06, 1693.47, 74.67, 24729.21, -4867.94, 6705.18, 410.79, 70.74, 101.7, 3264.3, 137.01, 6.18, 2100.21, 5295.24, 520.29, 7205.52, 2602.65, 116.67, 224.91, -5153.93, 3882.69, -6535.24, -1254.1, 84.56, -186.38, -3167.2, -7935.59, 37.02, 1908.06, -27087.84, 829.32, 8727.44, 2011.47, -11629.64, 234.96, 53.1, 1248.14, 1511.07, 7374.24, 1193.28, 1090.23, 553.86, 38483.86, 255.81, 528.54, 326.07, 3924.36, 1018.92, 36.48, 113.24, -1770.05, 527.64, 224.49, 79.53, 64.77, 38.08, 868.08, 2265.06, -2643.62, 833.73, 5100.03, 326.44, 18158.84, 1682.01, -3290.22, 8283.33, 7926.18, 1694.41, 30522.92, 1214.07, 900.6, -6860.8, -865.91, 26.16, 47.22, 863.52, 7061.26, 73.92, 33.12, 1801.23, 38815.44, 431.13, 216.81, 16.5, 53688.2, 1210.32, 236.94, 210.84, 3.18, 2.22, 10265.64, 7212.3, 343.56, 3898.28, 568.11, -1867.85, 5782.38, 697.29, -192.06, 10179.02, 616.32, 1090.47, 165.84, 6138.28, 39723.06, 2085.14, 90.0, 129.93, 7957.53, 2131.86, 562.44, 99.12, 1298.37, 7580.33, 113.73, 139.71, 456.0, 21.24, 292.68, 30.34, 5817.15, 1060.89, 252.9, 3060.61, 6.6, 219.09, 8735.82, 31481.09, 2.85, -3124.72, 2195.94, 3464.7, 141.12, 1125.69, -1752.03, 3281.52, -303.77, 114.18, -2412.63, -5099.61, 146.64, 660.22, 18329.28, 28529.84, -232.27, 7435.41, -1157.94, -746.73, -30324.2, 2.52, 1313.44, 213.72, -5708.95, 930.18, 1663.02, 31.59, 1787.88, -8219.56, 973.92, 4.32, 8729.78, -2529.52, 5361.06, 69.21, 519.3, 13.56, 2236.77, 213.96, 367.98, 5074.2, 206.61, 7620.36, 2093.19, 164.07, 230.01, -815.82, 4226.7, -3635.09, -3344.17, 167.26, 143.79, -8233.57, -4085.21, 919.35, -25232.35, 234.33, 12040.68, 7206.28, -15112.76, 206.04, -2662.49, 2346.81, 4461.36, 93.48, 82.11, 147.87, 10389.53, 395.58, 474.74, 1333.26, 3913.02, 117.36, 858.78, 6.9, -4628.49, 1170.6, 218.55, 539.58, -211.0, 438.87, 317.16, 310.8, -1578.09, 706.56, 6617.4, 803.84, 2475.26, 764.34, -1461.88, 3805.56, 7371.27, -1377.13, 42435.03, 472.47, 315.48, -11755.91, -2418.6, 6.36, 9317.76, 326.88, -287.31, 637.68, 17579.17, 70.83, 47.4, 26143.92, 1548.15, 612.78, 
17842.76, 6735.39, 1206.5, -10035.74, 149.4, -777.85, 5566.29, 748.92, 14941.58, 348.93, 1944.06, -5.51, 7026.84, 46114.92, 2361.86, 2613.24, 1277.37, 2587.74, 103.08, 311.43, 1250.58, 13055.21, 18.21, 108.24, 709.44, 115.92, 1863.6, 1873.86, 817.32, 7577.64, 1019.19, 6813.03, 24698.84, 66.24, -10971.39, 2056.47, 2095.35, 246.33, 2797.89])
```
### Plotting a Scatterplot
```
# plotting scatterplot
plt.scatter(sales,profit)
# necessary command to display graph
plt.show()
plt.scatter(profit,sales)
plt.show()
# Sales and Profit data for different product categories across different countries
sales = np.array ([1013.14, 8298.48, 875.51, 22320.83, 9251.6, 4516.86, 585.16, 836154.03, 216748.48, 174.2, 27557.79, 563.25, 558.11, 37117.45, 357.36, 2206.96, 709.5, 35064.03, 7230.78, 235.33, 148.32, 3973.27, 11737.8, 7104.63, 83.67, 5569.83, 92.34, 107104.36, 1045.62, 9072.51, 42485.82, 5093.82, 14846.16, 943.92, 684.36, 15012.03, 38196.18, 2448.75, 28881.96, 13912.14, 4507.2, 4931.06, 12805.05, 67912.73, 4492.2, 1740.01, 458.04, 16904.32, 21744.53, 10417.26, 18665.33, 2808.42, 54195.57, 67332.5, 24390.95, 1790.43, 2234.19, 9917.5, 7408.14, 36051.99, 1352.22, 1907.7, 245722.14, 2154.66, 1078.21, 3391.65, 28262.73, 5177.04, 66.51, 2031.34, 1683.72, 1970.01, 6515.82, 1055.31, 1029.48, 5303.4, 1850.96, 1159.41, 39989.13, 1183.87, 96365.09, 8356.68, 7010.24, 23119.23, 46109.28, 146071.84, 242259.03, 9058.95, 1313.67, 31525.06, 2019.94, 703.04, 1868.79, 700.5, 55512.02, 243.5, 2113.18, 11781.81, 262189.49, 3487.29, 513.12, 312050.42, 5000.7, 121.02, 1302.78, 169.92, 124.29, 57366.05, 29445.93, 4614.3, 45009.98, 309.24, 3353.67, 41348.34, 2280.27, 61193.7, 1466.79, 12419.94, 445.12, 25188.65, 263514.92, 12351.23, 1152.3, 26298.81, 9900.78, 5355.57, 2325.66, 6282.81, 127707.92, 1283.1, 3560.15, 3723.84, 13715.01, 4887.9, 3396.89, 33348.42, 625.02, 1665.48, 32486.97, 340212.44, 20516.22, 8651.16, 13590.06, 2440.35, 6462.57, 1770.13, 7527.18, 1433.65, 423.3, 21601.72, 10035.72, 2378.49, 3062.38, 719469.32, 179366.79, 345.17, 30345.78, 300.71, 940.81, 36468.08, 1352.85, 1755.72, 2391.96, 19.98, 19792.8, 15633.88, 7.45, 521.67, 1118.24, 7231.68, 12399.32, 204.36, 23.64, 5916.48, 313.98, 108181.5, 9212.42, 27476.91, 1761.33, 289.5, 780.3, 15098.46, 813.27, 47.55, 8323.23, 22634.64, 1831.02, 28808.1, 10539.78, 588.99, 939.78, 7212.41, 15683.01, 41369.09, 5581.6, 403.36, 375.26, 12276.66, 15393.56, 76.65, 5884.38, 18005.49, 3094.71, 43642.78, 35554.83, 22977.11, 1026.33, 665.28, 9712.49, 6038.52, 30756.51, 3758.25, 4769.49, 2463.3, 160153.16, 967.11, 2311.74, 1414.83, 12764.91, 4191.24, 110.76, 637.34, 1195.12, 2271.63, 804.12, 196.17, 167.67, 131.77, 2842.05, 9969.12, 1784.35, 3098.49, 25005.54, 1300.1, 118697.39, 7920.54, 6471.78, 31707.57, 37636.47, 118777.77, 131170.76, 3980.88, 3339.39, 26563.9, 4038.73, 124.8, 196.65, 2797.77, 29832.76, 184.84, 79.08, 8047.83, 205313.25, 1726.98, 899.73, 224.06, 304763.54, 6101.31, 729.6, 896.07, 17.82, 26.22, 46429.78, 31167.27, 2455.94, 37714.3, 1506.93, 3812.78, 25223.34, 3795.96, 437.31, 41278.86, 2091.81, 6296.61, 468.82, 23629.64, 160435.53, 9725.46, 1317.03, 1225.26, 30034.08, 7893.45, 2036.07, 215.52, 3912.42, 82783.43, 253.14, 966.96, 3381.26, 164.07, 1984.23, 75.12, 25168.17, 3295.53, 991.12, 10772.1, 44.16, 1311.45, 35352.57, 245783.54, 20.49, 13471.06, 8171.16, 14075.67, 611.82, 3925.56, 981.84, 10209.84, 156.56, 243.06, 21287.52, 7300.51, 434.52, 6065.0, 741577.51, 132461.03, 224.75, 28953.6, 757.98, 528.15, 34922.41, 50.58, 2918.48, 1044.96, 22195.13, 3951.48, 6977.64, 219.12, 5908.38, 10987.46, 4852.26, 445.5, 71860.82, 14840.45, 24712.08, 1329.9, 1180.44, 85.02, 10341.63, 690.48, 1939.53, 20010.51, 914.31, 25223.82, 12804.66, 2124.24, 602.82, 2961.66, 15740.79, 74138.35, 7759.39, 447.0, 2094.84, 22358.95, 21734.53, 4223.73, 17679.53, 1019.85, 51848.72, 69133.3, 30146.9, 705.48, 14508.88, 7489.38, 20269.44, 246.12, 668.13, 768.93, 215677.35, 899.16, 2578.2, 4107.99, 20334.57, 366.84, 3249.27, 98.88, 3497.88, 3853.05, 786.75, 1573.68, 458.36, 1234.77, 1094.22, 2300.61, 970.14, 3068.25, 35792.85, 4277.82, 71080.28, 3016.86, 3157.49, 15888.0, 
30000.36, 140037.89, 216056.25, 1214.22, 1493.94, 32036.69, 4979.66, 106.02, 46257.68, 1033.3, 937.32, 3442.62, 160633.45, 213.15, 338.88, 242117.13, 9602.34, 2280.99, 73759.08, 23526.12, 6272.74, 43416.3, 576.78, 1471.61, 20844.9, 3497.7, 56382.38, 902.58, 6235.26, 48.91, 32684.24, 276611.58, 13370.38, 10595.28, 4555.14, 10084.38, 267.72, 1012.95, 4630.5, 149433.51, 364.32, 349.2, 4647.56, 504.0, 10343.52, 5202.66, 2786.26, 34135.95, 2654.58, 24699.51, 339239.87, 136.26, 23524.51, 8731.68, 8425.86, 835.95, 11285.19])
profit = np.array([-1213.46, 1814.13, -1485.7, -2286.73, -2872.12, 946.8, 198.48, 145454.95, 49476.1, -245.56, 5980.77, -790.47, -895.72, -34572.08, 117.9, 561.96, 152.85, 1426.05, 1873.17, -251.03, 68.22, 635.11, 3722.4, -3168.63, 27.6, 952.11, 7.38, 20931.13, 186.36, -5395.38, 9738.45, 525.27, 3351.99, 120.78, 266.88, 3795.21, 8615.97, 609.54, 7710.57, 2930.43, 1047.96, -2733.32, 2873.73, -5957.89, -909.6, 163.41, -376.02, -6322.68, -10425.86, 2340.36, -28430.53, 756.12, 12633.33, 7382.54, -14327.69, 436.44, 683.85, -694.91, 1960.56, 10925.82, 334.08, 425.49, 53580.2, 1024.56, 110.93, 632.22, 8492.58, 1418.88, 19.26, -2567.57, 346.26, 601.86, 1318.68, 304.05, 428.37, 1416.24, -2878.18, 283.41, 12611.04, 261.95, -648.43, 1112.88, -2640.29, 6154.32, 11558.79, 15291.4, 56092.65, 1515.39, 342.03, -10865.66, -902.8, 351.52, 364.17, 87.72, 11565.66, 75.4, 289.33, 3129.63, 50795.72, 783.72, 215.46, 29196.89, 1147.26, 53.22, 286.56, 73.02, 42.24, 13914.85, 5754.54, 998.04, -1476.04, 86.58, -1636.35, 10511.91, 647.34, 13768.62, 338.67, 3095.67, 173.84, 5632.93, 64845.11, 3297.33, 338.61, 7246.62, 2255.52, 1326.36, 827.64, 1100.58, 9051.36, 412.23, 1063.91, 940.59, 3891.84, 1599.51, 1129.57, 8792.64, 6.24, 592.77, 8792.85, 47727.5, -4597.68, 2242.56, 3546.45, 321.87, 1536.72, -2463.29, 1906.08, -1916.99, 186.24, 3002.05, -3250.98, 554.7, 830.64, 122612.79, 33894.21, -559.03, 7528.05, -477.67, -1660.25, -33550.96, 481.68, 425.08, 450.3, 9.57, -3025.29, 2924.62, -11.84, 87.36, 26.51, 1727.19, -6131.18, 59.16, 3.06, 1693.47, 74.67, 24729.21, -4867.94, 6705.18, 410.79, 70.74, 101.7, 3264.3, 137.01, 6.18, 2100.21, 5295.24, 520.29, 7205.52, 2602.65, 116.67, 224.91, -5153.93, 3882.69, -6535.24, -1254.1, 84.56, -186.38, -3167.2, -7935.59, 37.02, 1908.06, -27087.84, 829.32, 8727.44, 2011.47, -11629.64, 234.96, 53.1, 1248.14, 1511.07, 7374.24, 1193.28, 1090.23, 553.86, 38483.86, 255.81, 528.54, 326.07, 3924.36, 1018.92, 36.48, 113.24, -1770.05, 527.64, 224.49, 79.53, 64.77, 38.08, 868.08, 2265.06, -2643.62, 833.73, 5100.03, 326.44, 18158.84, 1682.01, -3290.22, 8283.33, 7926.18, 1694.41, 30522.92, 1214.07, 900.6, -6860.8, -865.91, 26.16, 47.22, 863.52, 7061.26, 73.92, 33.12, 1801.23, 38815.44, 431.13, 216.81, 16.5, 53688.2, 1210.32, 236.94, 210.84, 3.18, 2.22, 10265.64, 7212.3, 343.56, 3898.28, 568.11, -1867.85, 5782.38, 697.29, -192.06, 10179.02, 616.32, 1090.47, 165.84, 6138.28, 39723.06, 2085.14, 90.0, 129.93, 7957.53, 2131.86, 562.44, 99.12, 1298.37, 7580.33, 113.73, 139.71, 456.0, 21.24, 292.68, 30.34, 5817.15, 1060.89, 252.9, 3060.61, 6.6, 219.09, 8735.82, 31481.09, 2.85, -3124.72, 2195.94, 3464.7, 141.12, 1125.69, -1752.03, 3281.52, -303.77, 114.18, -2412.63, -5099.61, 146.64, 660.22, 18329.28, 28529.84, -232.27, 7435.41, -1157.94, -746.73, -30324.2, 2.52, 1313.44, 213.72, -5708.95, 930.18, 1663.02, 31.59, 1787.88, -8219.56, 973.92, 4.32, 8729.78, -2529.52, 5361.06, 69.21, 519.3, 13.56, 2236.77, 213.96, 367.98, 5074.2, 206.61, 7620.36, 2093.19, 164.07, 230.01, -815.82, 4226.7, -3635.09, -3344.17, 167.26, 143.79, -8233.57, -4085.21, 919.35, -25232.35, 234.33, 12040.68, 7206.28, -15112.76, 206.04, -2662.49, 2346.81, 4461.36, 93.48, 82.11, 147.87, 10389.53, 395.58, 474.74, 1333.26, 3913.02, 117.36, 858.78, 6.9, -4628.49, 1170.6, 218.55, 539.58, -211.0, 438.87, 317.16, 310.8, -1578.09, 706.56, 6617.4, 803.84, 2475.26, 764.34, -1461.88, 3805.56, 7371.27, -1377.13, 42435.03, 472.47, 315.48, -11755.91, -2418.6, 6.36, 9317.76, 326.88, -287.31, 637.68, 17579.17, 70.83, 47.4, 26143.92, 1548.15, 612.78, 
17842.76, 6735.39, 1206.5, -10035.74, 149.4, -777.85, 5566.29, 748.92, 14941.58, 348.93, 1944.06, -5.51, 7026.84, 46114.92, 2361.86, 2613.24, 1277.37, 2587.74, 103.08, 311.43, 1250.58, 13055.21, 18.21, 108.24, 709.44, 115.92, 1863.6, 1873.86, 817.32, 7577.64, 1019.19, 6813.03, 24698.84, 66.24, -10971.39, 2056.47, 2095.35, 246.33, 2797.89])
# corresponding category and country value to the above arrays
product_category = np.array(['Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Technology', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office 
Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Office Supplies', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture', 'Furniture'])
country = np.array(['Zimbabwe', 'Zambia', 'Yemen', 'Vietnam', 'Venezuela', 'Uzbekistan', 'Uruguay', 'United States', 'United Kingdom', 'United Arab Emirates', 'Ukraine', 'Uganda', 'Turkmenistan', 'Turkey', 'Tunisia', 'Trinidad and Tobago', 'Togo', 'Thailand', 'Tanzania', 'Tajikistan', 'Taiwan', 'Syria', 'Switzerland', 'Sweden', 'Swaziland', 'Sudan', 'Sri Lanka', 'Spain', 'South Sudan', 'South Korea', 'South Africa', 'Somalia', 'Singapore', 'Sierra Leone', 'Serbia', 'Senegal', 'Saudi Arabia', 'Rwanda', 'Russia', 'Romania', 'Qatar', 'Portugal', 'Poland', 'Philippines', 'Peru', 'Paraguay', 'Papua New Guinea', 'Panama', 'Pakistan', 'Norway', 'Nigeria', 'Niger', 'Nicaragua', 'New Zealand', 'Netherlands', 'Nepal', 'Namibia', 'Myanmar (Burma)', 'Mozambique', 'Morocco', 'Mongolia', 'Moldova', 'Mexico', 'Mauritania', 'Martinique', 'Mali', 'Malaysia', 'Madagascar', 'Luxembourg', 'Lithuania', 'Libya', 'Liberia', 'Lesotho', 'Lebanon', 'Kyrgyzstan', 'Kenya', 'Kazakhstan', 'Jordan', 'Japan', 'Jamaica', 'Italy', 'Israel', 'Ireland', 'Iraq', 'Iran', 'Indonesia', 'India', 'Hungary', 'Hong Kong', 'Honduras', 'Haiti', 'Guyana', 'Guinea-Bissau', 'Guinea', 'Guatemala', 'Guadeloupe', 'Greece', 'Ghana', 'Germany', 'Georgia', 'Gabon', 'France', 'Finland', 'Ethiopia', 'Estonia', 'Eritrea', 'Equatorial Guinea', 'El Salvador', 'Egypt', 'Ecuador', 'Dominican Republic', 'Djibouti', 'Denmark', 'Democratic Republic of the Congo', 'Czech Republic', 'Cuba', 'Croatia', "Cote d'Ivoire", 'Costa Rica', 'Colombia', 'China', 'Chile', 'Central African Republic', 'Canada', 'Cameroon', 'Cambodia', 'Burkina Faso', 'Bulgaria', 'Brazil', 'Bosnia and Herzegovina', 'Bolivia', 'Benin', 'Belgium', 'Belarus', 'Barbados', 'Bangladesh', 'Bahrain', 'Azerbaijan', 'Austria', 'Australia', 'Argentina', 'Angola', 'Algeria', 'Albania', 'Afghanistan', 'Zimbabwe', 'Zambia', 'Yemen', 'Western Sahara', 'Vietnam', 'Venezuela', 'Uzbekistan', 'Uruguay', 'United States', 'United Kingdom', 'United Arab Emirates', 'Ukraine', 'Uganda', 'Turkmenistan', 'Turkey', 'Tunisia', 'Trinidad and Tobago', 'Togo', 'The Gambia', 'Thailand', 'Tanzania', 'Tajikistan', 'Taiwan', 'Syria', 'Switzerland', 'Sweden', 'Swaziland', 'Suriname', 'Sudan', 'Sri Lanka', 'Spain', 'South Korea', 'South Africa', 'Somalia', 'Slovenia', 'Slovakia', 'Singapore', 'Sierra Leone', 'Serbia', 'Senegal', 'Saudi Arabia', 'Rwanda', 'Russia', 'Romania', 'Republic of the Congo', 'Qatar', 'Portugal', 'Poland', 'Philippines', 'Peru', 'Paraguay', 'Papua New Guinea', 'Panama', 'Pakistan', 'Oman', 'Norway', 'Nigeria', 'Niger', 'Nicaragua', 'New Zealand', 'Netherlands', 'Nepal', 'Namibia', 'Myanmar (Burma)', 'Mozambique', 'Morocco', 'Montenegro', 'Mongolia', 'Moldova', 'Mexico', 'Mauritania', 'Martinique', 'Mali', 'Malaysia', 'Madagascar', 'Macedonia', 'Luxembourg', 'Lithuania', 'Libya', 'Liberia', 'Lesotho', 'Lebanon', 'Laos', 'Kyrgyzstan', 'Kenya', 'Kazakhstan', 'Jordan', 'Japan', 'Jamaica', 'Italy', 'Israel', 'Ireland', 'Iraq', 'Iran', 'Indonesia', 'India', 'Hungary', 'Hong Kong', 'Honduras', 'Haiti', 'Guyana', 'Guinea-Bissau', 'Guinea', 'Guatemala', 'Guadeloupe', 'Greece', 'Ghana', 'Germany', 'Georgia', 'Gabon', 'French Guiana', 'France', 'Finland', 'Ethiopia', 'Estonia', 'Eritrea', 'Equatorial Guinea', 'El Salvador', 'Egypt', 'Ecuador', 'Dominican Republic', 'Djibouti', 'Denmark', 'Democratic Republic of the Congo', 'Czech Republic', 'Cyprus', 'Cuba', 'Croatia', "Cote d'Ivoire", 'Costa Rica', 'Colombia', 'China', 'Chile', 'Chad', 'Central African Republic', 'Canada', 'Cameroon', 'Cambodia', 'Burkina 
Faso', 'Bulgaria', 'Brazil', 'Botswana', 'Bosnia and Herzegovina', 'Bolivia', 'Bhutan', 'Benin', 'Belize', 'Belgium', 'Belarus', 'Barbados', 'Bangladesh', 'Bahrain', 'Azerbaijan', 'Austria', 'Australia', 'Armenia', 'Argentina', 'Angola', 'Algeria', 'Albania', 'Afghanistan', 'Zimbabwe', 'Zambia', 'Yemen', 'Western Sahara', 'Vietnam', 'Venezuela', 'Uzbekistan', 'Uruguay', 'United States', 'United Kingdom', 'United Arab Emirates', 'Ukraine', 'Uganda', 'Turkmenistan', 'Turkey', 'Tunisia', 'Trinidad and Tobago', 'Togo', 'Thailand', 'Tanzania', 'Taiwan', 'Syria', 'Switzerland', 'Sweden', 'Sudan', 'Sri Lanka', 'Spain', 'South Korea', 'South Africa', 'Somalia', 'Slovenia', 'Slovakia', 'Singapore', 'Sierra Leone', 'Senegal', 'Saudi Arabia', 'Rwanda', 'Russia', 'Romania', 'Republic of the Congo', 'Qatar', 'Portugal', 'Poland', 'Philippines', 'Peru', 'Paraguay', 'Papua New Guinea', 'Panama', 'Pakistan', 'Norway', 'Nigeria', 'Niger', 'Nicaragua', 'New Zealand', 'Netherlands', 'Nepal', 'Myanmar (Burma)', 'Mozambique', 'Morocco', 'Montenegro', 'Mongolia', 'Moldova', 'Mexico', 'Mauritania', 'Martinique', 'Mali', 'Malaysia', 'Malawi', 'Madagascar', 'Macedonia', 'Lithuania', 'Libya', 'Liberia', 'Lebanon', 'Laos', 'Kyrgyzstan', 'Kuwait', 'Kenya', 'Kazakhstan', 'Jordan', 'Japan', 'Jamaica', 'Italy', 'Israel', 'Ireland', 'Iraq', 'Iran', 'Indonesia', 'India', 'Hungary', 'Hong Kong', 'Honduras', 'Haiti', 'Guyana', 'Guatemala', 'Guadeloupe', 'Greece', 'Ghana', 'Germany', 'Georgia', 'Gabon', 'France', 'Finland', 'Estonia', 'El Salvador', 'Egypt', 'Ecuador', 'Dominican Republic', 'Djibouti', 'Denmark', 'Democratic Republic of the Congo', 'Czech Republic', 'Cuba', 'Croatia', "Cote d'Ivoire", 'Costa Rica', 'Colombia', 'China', 'Chile', 'Canada', 'Cameroon', 'Cambodia', 'Burundi', 'Burkina Faso', 'Bulgaria', 'Brazil', 'Botswana', 'Bosnia and Herzegovina', 'Bolivia', 'Benin', 'Belgium', 'Belarus', 'Barbados', 'Bangladesh', 'Azerbaijan', 'Austria', 'Australia', 'Armenia', 'Argentina', 'Angola', 'Algeria', 'Albania', 'Afghanistan'])
```
### Adding title and labeling axes
```
# plotting scatter chart
plt.scatter(profit,sales)
# Adding and formatting title
plt.title("Sales Across Profit in various Counteries for different Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})
# Labeling Axes
plt.xlabel("Profit", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.show()
```
### Representing product categories using different colors
```
product_categories = np.array(["Technology", "Furniture", "Office Supplies"])
colors = np.array(["cyan", "green", "yellow"])
# plotting the scatterplot with color coding the points belonging to different categories
for color,category in zip(colors,product_categories):
sales_cat = sales[product_category==category]
profit_cat = profit[product_category==category]
plt.scatter(profit_cat,sales_cat,c=color,label=category)
# Adding and formatting title
plt.title("Sales Across Profit in various Counteries for different Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})
# Labeling Axes
plt.xlabel("Profit", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
# Adding legend for interpretation of points
plt.legend()
plt.show()
```
### Adding labels to points belonging to a specific country
```
# plotting the scatterplot with color coding the points belonging to different categories
for color,category in zip(colors,product_categories):
sales_cat = sales[product_category==category]
profit_cat = profit[product_category==category]
plt.scatter(profit_cat,sales_cat,c=color,label=category)
# labeling points that belong to country "India"
for xy in zip(profit[country == "India"],sales[country == "India"]):
plt.annotate(text="India",xy = xy)
# Adding and formatting title
plt.title("Sales Across Profit in various Counteries for different Product Categories", fontdict={'fontsize': 20, 'fontweight' : 5, 'color' : 'Green'})
# Labeling Axes
plt.xlabel("Profit", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
plt.ylabel("Sales", fontdict={'fontsize': 10, 'fontweight' : 3, 'color' : 'Blue'})
# Adding legend for interpretation of points
plt.legend()
plt.show()
```
# Line Chart: Trend of sales over the 12 months
- Can be used to present a trend, with the time variable on the x-axis
- In some cases, can be used as an alternative to a scatterplot to understand the relationship between two variables
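As a small aside (a minimal sketch using made-up monthly figures, not the retailer's data below), the time variable can also be passed to Matplotlib as actual dates, in which case the x-axis is spaced and labelled chronologically without any manual ordering:
```
import numpy as np
import matplotlib.pyplot as plt

# hypothetical monthly totals, for illustration only
dates = np.arange('2015-01', '2016-01', dtype='datetime64[M]').astype('datetime64[D]')
demo_sales = np.array([241, 185, 263, 243, 288, 402, 259, 457, 481, 423, 555, 503])

plt.plot(dates, demo_sales, 'b', marker='x')
plt.title("Monthly sales (illustrative data)")
plt.xlabel("Month")
plt.ylabel("Sales (in thousands)")
plt.xticks(rotation=90)
plt.show()
```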
```
# Sales data across months
months = np.array(['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'])
sales = np.array([241268.56, 184837.36, 263100.77, 242771.86, 288401.05, 401814.06, 258705.68, 456619.94, 481157.24, 422766.63, 555279.03, 503143.69])
# plotting the sales values ('bx' draws blue x markers without a connecting line)
plt.plot(months,sales,'bx')
# adding title to the chart
plt.title("Sales across months")
# labeling the axes
plt.xlabel("Months")
plt.ylabel("Sales")
# rotating the tick values of x-axis
plt.xticks(rotation = 90)
# displaying the created plot
plt.show()
# quick aside: plotting 50 random integers as red circle markers
y = np.random.randint(1,100, 50)
plt.plot(y, 'ro')
plt.show()
# plotting a line chart
plt.plot(months,sales,'b',marker='x')
# adding title to the chart
plt.title("Sales across months")
# labeling the axes
plt.xlabel("Months")
plt.ylabel("Sales")
# rotating the tick values of x-axis
plt.xticks(rotation = 90)
# displaying the created plot
plt.show()
```
# Histogram: Distribution of employees across different age groups
- Useful in checking the distribution of a data range
- Groups the data range into bins and builds a bar for each bin showing its frequency
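To make the idea of bins concrete, here is a minimal sketch (on a small made-up sample, not the employee data below) showing the bin edges and counts that `plt.hist` draws:
```
import numpy as np

# hypothetical ages, for illustration only
sample_ages = np.array([22, 23, 23, 24, 27, 28, 29, 33, 34, 38, 41, 45])

# np.histogram performs the same binning that plt.hist(sample_ages, bins=4) uses
counts, bin_edges = np.histogram(sample_ages, bins=4)
print(bin_edges)  # 5 edges delimiting the 4 equal-width bins
print(counts)     # number of ages falling into each bin
```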
```
# data corresponding to age of the employees in the company
age = np.array([23, 22, 24, 24, 23, 23, 22, 23, 24, 24, 24, 22, 24, 23, 24, 23, 22, 24, 23, 23, 22, 23, 23, 24, 23, 24, 23, 22, 24, 22, 23, 24, 23, 24, 22, 22, 24, 23, 22, 24, 24, 24, 23, 24, 24, 22, 23, 23, 24, 22, 22, 24, 22, 23, 22, 23, 22, 23, 23, 23, 23, 22, 22, 23, 23, 23, 23, 23, 23, 22, 29, 29, 27, 28, 28, 29, 28, 27, 26, 27, 28, 29, 26, 28, 26, 28, 27, 27, 28, 28, 26, 29, 28, 28, 26, 27, 26, 28, 27, 29, 29, 27, 27, 27, 28, 29, 29, 29, 27, 28, 28, 26, 28, 27, 26, 26, 27, 26, 29, 28, 28, 28, 29, 26, 26, 26, 29, 26, 28, 26, 28, 28, 27, 27, 27, 29, 27, 28, 27, 26, 29, 29, 27, 29, 26, 29, 26, 29, 29, 27, 28, 28, 27, 29, 26, 28, 26, 28, 27, 29, 29, 29, 27, 27, 29, 29, 26, 26, 26, 27, 28, 27, 28, 28, 29, 27, 26, 27, 29, 28, 29, 27, 27, 26, 26, 26, 26, 29, 28, 28, 33, 34, 33, 33, 34, 33, 31, 32, 33, 33, 32, 34, 32, 31, 33, 34, 31, 33, 34, 33, 34, 33, 32, 33, 31, 33, 32, 32, 31, 34, 33, 31, 34, 32, 32, 31, 32, 31, 32, 34, 33, 33, 31, 32, 32, 31, 32, 33, 34, 32, 34, 31, 32, 31, 33, 32, 34, 31, 32, 34, 31, 31, 34, 34, 34, 32, 34, 33, 33, 32, 32, 33, 31, 33, 31, 32, 34, 32, 32, 31, 34, 32, 32, 31, 32, 34, 32, 33, 31, 34, 31, 31, 32, 31, 33, 34, 34, 34, 31, 33, 34, 33, 34, 31, 34, 34, 33, 31, 32, 33, 31, 31, 33, 32, 34, 32, 34, 31, 31, 34, 32, 32, 31, 31, 32, 31, 31, 32, 33, 32, 31, 32, 32, 31, 31, 34, 31, 34, 33, 32, 31, 34, 34, 31, 34, 31, 32, 34, 33, 33, 34, 32, 33, 31, 31, 33, 32, 31, 31, 31, 37, 38, 37, 37, 36, 37, 36, 39, 37, 39, 37, 39, 38, 36, 37, 36, 38, 38, 36, 39, 39, 37, 39, 36, 37, 36, 36, 37, 38, 36, 38, 39, 39, 36, 38, 37, 39, 38, 39, 39, 36, 38, 37, 38, 39, 36, 37, 36, 36, 38, 38, 38, 39, 36, 37, 37, 39, 37, 37, 36, 36, 39, 37, 36, 36, 36, 39, 37, 37, 37, 37, 39, 36, 39, 37, 38, 37, 36, 36, 39, 39, 36, 36, 39, 39, 39, 37, 38, 36, 36, 37, 38, 37, 38, 37, 39, 39, 37, 39, 36, 36, 39, 39, 39, 36, 38, 39, 39, 39, 39, 38, 36, 37, 37, 38, 38, 39, 36, 37, 37, 39, 36, 37, 37, 36, 36, 36, 38, 39, 38, 36, 38, 36, 39, 38, 36, 36, 37, 39, 39, 37, 37, 37, 36, 37, 36, 36, 38, 38, 39, 36, 39, 36, 37, 37, 39, 39, 36, 38, 39, 39, 39, 37, 37, 37, 37, 39, 36, 37, 39, 38, 39, 36, 37, 38, 39, 38, 36, 37, 38, 42, 43, 44, 43, 41, 42, 41, 41, 42, 41, 43, 44, 43, 44, 44, 42, 43, 44, 43, 41, 44, 42, 43, 42, 42, 44, 43, 42, 41, 42, 41, 41, 41, 44, 44, 44, 41, 43, 42, 42, 43, 43, 44, 44, 44, 44, 44, 41, 42, 44, 43, 42, 42, 43, 44, 44, 44, 44, 41, 42, 43, 43, 43, 41, 43, 41, 42, 41, 42, 42, 41, 42, 44, 41, 43, 42, 41, 43, 41, 44, 44, 43, 43, 43, 41, 41, 41, 42, 43, 42, 48, 48, 48, 49, 47, 45, 46, 49, 46, 49, 49, 46, 47, 45, 47, 45, 47, 49, 47, 46, 46, 47, 45, 49, 49, 49, 45, 46, 47, 46, 45, 46, 45, 48, 48, 45, 49, 46, 48, 49, 47, 48, 45, 48, 46, 45, 48, 45, 46, 46, 48, 47, 46, 45, 48, 46, 49, 47, 46, 49, 48, 46, 47, 47, 46, 48, 47, 46, 46, 49, 50, 54, 53, 55, 51, 50, 51, 54, 54, 53, 53, 51, 51, 50, 54, 51, 51, 55, 50, 51, 50, 50, 53, 52, 54, 53, 55, 52, 52, 50, 52, 55, 54, 50, 50, 55, 52, 54, 52, 54])
# Checking the number of employees
len(age)
# plotting a histogram
plt.hist(age)
plt.show()
```
### Plotting a histogram with fixed number of bins
```
# plotting a histogram
plt.hist(age,bins=5,color='green',edgecolor='black')
plt.show()
list_1 = [48.49, 67.54, 57.47, 68.17, 51.18, 68.31, 50.33, 66.7, 45.62, 43.59, 53.64, 70.08, 47.69, 61.27, 44.14, 51.62, 48.72, 65.11]
weights = np.array(list_1)
plt.hist(weights,bins = 4,range=[40,80],edgecolor='white')
plt.show()
```
# Box plot: Understanding the spread of sales across different countries
- Useful in understanding the spread of the data
- Divides the data based on percentile values (quartiles)
- Helps identify the presence of outliers
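As a reminder of what the box and whiskers summarise, this minimal sketch (on a small made-up sample, not the sales arrays below) computes the quartiles and the usual 1.5×IQR fences beyond which points are drawn as outliers:
```
import numpy as np

# hypothetical sales figures, for illustration only
demo = np.array([120, 450, 480, 510, 560, 610, 700, 950, 4200])

q1, q2, q3 = np.percentile(demo, [25, 50, 75])
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = demo[(demo < lower_fence) | (demo > upper_fence)]
print(q1, q2, q3)   # box edges (Q1, Q3) and the median line (Q2)
print(outliers)     # points plotted individually beyond the whiskers
```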
```
# Creating arrays with sales in different countries across each category: 'Furniture', 'Technology' and 'Office Supplies'
sales_technology = np.array ([1013.14, 8298.48, 875.51, 22320.83, 9251.6, 4516.86, 585.16, 174.2, 27557.79, 563.25, 558.11, 37117.45, 357.36, 2206.96, 709.5, 35064.03, 7230.78, 235.33, 148.32, 3973.27, 11737.8, 7104.63, 83.67, 5569.83, 92.34, 1045.62, 9072.51, 42485.82, 5093.82, 14846.16, 943.92, 684.36, 15012.03, 38196.18, 2448.75, 28881.96, 13912.14, 4507.2, 4931.06, 12805.05, 67912.73, 4492.2, 1740.01, 458.04, 16904.32, 21744.53, 10417.26, 18665.33, 2808.42, 54195.57, 67332.5, 24390.95, 1790.43, 2234.19, 9917.5, 7408.14, 36051.99, 1352.22, 1907.7, 2154.66, 1078.21, 3391.65, 28262.73, 5177.04, 66.51, 2031.34, 1683.72, 1970.01, 6515.82, 1055.31, 1029.48, 5303.4, 1850.96, 1159.41, 39989.13, 1183.87, 96365.09, 8356.68, 7010.24, 23119.23, 46109.28, 9058.95, 1313.67, 31525.06, 2019.94, 703.04, 1868.79, 700.5, 55512.02, 243.5, 2113.18, 11781.81, 3487.29, 513.12, 5000.7, 121.02, 1302.78, 169.92, 124.29, 57366.05, 29445.93, 4614.3, 45009.98, 309.24, 3353.67, 41348.34, 2280.27, 61193.7, 1466.79, 12419.94, 445.12, 25188.65, 12351.23, 1152.3, 26298.81, 9900.78, 5355.57, 2325.66, 6282.81, 1283.1, 3560.15, 3723.84, 13715.01, 4887.9, 3396.89, 33348.42, 625.02, 1665.48, 32486.97, 20516.22, 8651.16, 13590.06, 2440.35, 6462.57])
sales_office_supplies = np.array ([1770.13, 7527.18, 1433.65, 423.3, 21601.72, 10035.72, 2378.49, 3062.38, 345.17, 30345.78, 300.71, 940.81, 36468.08, 1352.85, 1755.72, 2391.96, 19.98, 19792.8, 15633.88, 7.45, 521.67, 1118.24, 7231.68, 12399.32, 204.36, 23.64, 5916.48, 313.98, 9212.42, 27476.91, 1761.33, 289.5, 780.3, 15098.46, 813.27, 47.55, 8323.23, 22634.64, 1831.02, 28808.1, 10539.78, 588.99, 939.78, 7212.41, 15683.01, 41369.09, 5581.6, 403.36, 375.26, 12276.66, 15393.56, 76.65, 5884.38, 18005.49, 3094.71, 43642.78, 35554.83, 22977.11, 1026.33, 665.28, 9712.49, 6038.52, 30756.51, 3758.25, 4769.49, 2463.3, 967.11, 2311.74, 1414.83, 12764.91, 4191.24, 110.76, 637.34, 1195.12, 2271.63, 804.12, 196.17, 167.67, 131.77, 2842.05, 9969.12, 1784.35, 3098.49, 25005.54, 1300.1, 7920.54, 6471.78, 31707.57, 37636.47, 3980.88, 3339.39, 26563.9, 4038.73, 124.8, 196.65, 2797.77, 29832.76, 184.84, 79.08, 8047.83, 1726.98, 899.73, 224.06, 6101.31, 729.6, 896.07, 17.82, 26.22, 46429.78, 31167.27, 2455.94, 37714.3, 1506.93, 3812.78, 25223.34, 3795.96, 437.31, 41278.86, 2091.81, 6296.61, 468.82, 23629.64, 9725.46, 1317.03, 1225.26, 30034.08, 7893.45, 2036.07, 215.52, 3912.42, 82783.43, 253.14, 966.96, 3381.26, 164.07, 1984.23, 75.12, 25168.17, 3295.53, 991.12, 10772.1, 44.16, 1311.45, 35352.57, 20.49, 13471.06, 8171.16, 14075.67, 611.82, 3925.56])
sales_furniture = np.array ([981.84, 10209.84, 156.56, 243.06, 21287.52, 7300.51, 434.52, 6065.0, 224.75, 28953.6, 757.98, 528.15, 34922.41, 50.58, 2918.48, 1044.96, 22195.13, 3951.48, 6977.64, 219.12, 5908.38, 10987.46, 4852.26, 445.5, 71860.82, 14840.45, 24712.08, 1329.9, 1180.44, 85.02, 10341.63, 690.48, 1939.53, 20010.51, 914.31, 25223.82, 12804.66, 2124.24, 602.82, 2961.66, 15740.79, 74138.35, 7759.39, 447.0, 2094.84, 22358.95, 21734.53, 4223.73, 17679.53, 1019.85, 51848.72, 69133.3, 30146.9, 705.48, 14508.88, 7489.38, 20269.44, 246.12, 668.13, 768.93, 899.16, 2578.2, 4107.99, 20334.57, 366.84, 3249.27, 98.88, 3497.88, 3853.05, 786.75, 1573.68, 458.36, 1234.77, 1094.22, 2300.61, 970.14, 3068.25, 35792.85, 4277.82, 71080.28, 3016.86, 3157.49, 15888.0, 30000.36, 1214.22, 1493.94, 32036.69, 4979.66, 106.02, 46257.68, 1033.3, 937.32, 3442.62, 213.15, 338.88, 9602.34, 2280.99, 73759.08, 23526.12, 6272.74, 43416.3, 576.78, 1471.61, 20844.9, 3497.7, 56382.38, 902.58, 6235.26, 48.91, 32684.24, 13370.38, 10595.28, 4555.14, 10084.38, 267.72, 1012.95, 4630.5, 364.32, 349.2, 4647.56, 504.0, 10343.52, 5202.66, 2786.26, 34135.95, 2654.58, 24699.51, 136.26, 23524.51, 8731.68, 8425.86, 835.95, 11285.19])
# plotting box plot for each category
plt.boxplot([sales_technology,sales_office_supplies,sales_furniture])
# adding title to the graph
plt.title("Sales across country and product categories")
# labeling the axes
plt.xlabel("Product Category")
plt.ylabel("Sales")
# Replacing the x ticks with respective category
plt.xticks((1,2,3),["Technology","Office Supplies","Furniture"])
plt.show()
```
### Set Data Path
```
from pathlib import Path
base_dir = Path("data")
train_dir = base_dir/Path("train")
validation_dir = base_dir/Path("validation")
test_dir = base_dir/Path("test")
```
### Image Transform Function
```
from torchvision import transforms
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=(.5, .5, .5), std=(.5, .5, .5))
])
```
### Load Training Data (x: features, y: labels)
```
import torch
from PIL import Image
x, y = [], []
for file_name in train_dir.glob("*.jpg"):
bounding_box_file = file_name.with_suffix('.txt')
with open(bounding_box_file) as file:
lines = file.readlines()
if(len(lines) > 1):
continue
else:
line = lines[0].strip('\n')
(classes, cen_x, cen_y, box_w, box_h) = list(map(float, line.split(' ')))
torch_data = torch.FloatTensor([cen_x, cen_y, box_w, box_h])
y.append(torch_data)
img = Image.open(str(file_name)).convert('RGB')
img = transform(img)
x.append(img)
```
### Put Training Data into Torch Loader
```
import torch.utils.data as Data
tensor_x = torch.stack(x)
tensor_y = torch.stack(y)
torch_dataset = Data.TensorDataset(tensor_x, tensor_y)
loader = Data.DataLoader(dataset=torch_dataset, batch_size=32, shuffle=True, num_workers=2)
```
### Load Pretrained ResNet18 Model
```
import torchvision
from torch import nn
model = torchvision.models.resnet18(pretrained=True)
fc_in_size = model.fc.in_features
# replace the ImageNet classification head with a 4-output regression head (box center x, center y, width, height)
model.fc = nn.Linear(fc_in_size, 4)
```
### Parameters
```
EPOCH = 10
LR = 1e-3
```
### Loss Function & Optimizer
```
loss_func = nn.SmoothL1Loss()
opt = torch.optim.Adam(model.parameters(), lr=LR)
```
### Training
```
for epoch in range(EPOCH):
for step, (batch_x, batch_y) in enumerate(loader):
        # these two lines are no-ops; move batch_x / batch_y to a GPU here if one is available
        batch_x = batch_x
        batch_y = batch_y
output = model(batch_x)
loss = loss_func(output, batch_y)
opt.zero_grad()
loss.backward()
opt.step()
if(step % 5 == 0):
print("Epoch {} | Step {} | Loss {}".format(epoch, step, loss))
```
### Show Some of the Predictions
```
%matplotlib inline
import cv2
from matplotlib import pyplot as plt
import numpy as np
model = model.cpu()
for batch_x, batch_y in loader:
predict = model(batch_x)
for x, pred, y in zip(batch_x, predict, batch_y):
(pos_x, pos_y, box_w, box_h) = pred
pos_x *= 224
pos_y *= 224
box_w *= 224
box_h *= 224
image = transforms.ToPILImage()(x)
img = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
        # cv2.rectangle expects integer pixel coordinates
        img = cv2.rectangle(img, (int(pos_x - box_w/2), int(pos_y - box_h/2)), (int(pos_x + box_w/2), int(pos_y + box_h/2)), (255, 0, 0), 3)
        # convert back to RGB so matplotlib displays the original colors
        plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
break
```
# Partitioning feature space
**Make sure to get latest dtreeviz**
```
! pip install -q -U dtreeviz
! pip install -q graphviz==0.17 # 0.18 deletes the `run` func I need
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.datasets import load_boston, load_iris, load_wine, load_digits, \
load_breast_cancer, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, accuracy_score
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
from sklearn import tree
from dtreeviz.trees import *
from dtreeviz.models.shadow_decision_tree import ShadowDecTree
def show_mse_leaves(X,y,max_depth):
t = DecisionTreeRegressor(max_depth=max_depth)
t.fit(X,y)
shadow = ShadowDecTree.get_shadow_tree(t, X, y, feature_names=['sqfeet'], target_name='rent')
root, leaves, internal = shadow._get_tree_nodes()
# node2samples = shadow._get_tree_nodes()_samples()
# isleaf = shadow.get_node_type(t)
n_node_samples = t.tree_.n_node_samples
    mse = mean_squared_error(y, [np.mean(y)]*len(y))  # MSE of predicting the overall mean
print(f"Root {0:3d} has {n_node_samples[0]:3d} samples with MSE ={mse:6.2f}")
print("-----------------------------------------")
avg_mse_per_record = 0.0
node2samples = shadow.get_node_samples()
for node in leaves:
leafy = y[node2samples[node.id]]
n = len(leafy)
mse = mean_squared_error(leafy, [np.mean(leafy)]*n)
avg_mse_per_record += mse * n
print(f"Node {node.id:3d} has {n_node_samples[node.id]:3d} samples with MSE ={mse:6.2f}")
avg_mse_per_record /= len(y)
print(f"Average MSE per record is {avg_mse_per_record:.1f}")
```
## Regression
```
df_cars = pd.read_csv("data/cars.csv")
X, y = df_cars[['ENG']], df_cars['MPG']
df_cars.head(3)
dt = DecisionTreeRegressor(max_depth=1)
dt.fit(X, y)
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={})
```
**Q.** What is the MSE between y and predicted $\hat{y} = \overline{y}$?
Hints: You can use function `mean_squared_error(` $y$,$\hat{y}$ `)`; create a vector of length $|y|$ with $\overline{y}$ as elements.
<details>
<summary>Solution</summary>
<pre>
mean_squared_error(y, [np.mean(y)]*len(y)) # about 60.76
</pre>
</details>
**Q.** Where would you split this if you could only split once? Set the `split` variable to a reasonable value.
```
split = ...
```
<details>
<summary>Solution</summary>
The split location that yields the purest subregions might be about split = 200 HP because the region to the right has a relatively flat MPG average.
</details>
**Alter the rtreeviz_univar() call to show the split with arg show={'splits'}**
<details>
<summary>Solution</summary>
<pre>
rtreeviz_univar(dt, X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG',
fontsize=9,
show={'splits'})
</pre>
</details>
**Q.** What are the MSE values for the left, right partitions?
Hints: Get the y values whose `X['ENG']` are less than `split` into `lefty` and those greater than or equal to `split` into `righty`. The split introduces two new children that are leaves until we (possibly) split them; the leaves predict the mean of their samples.
```
lefty = ...; mleft = ...
righty = ...; mright = ...
mse_left = ...
mse_right = ...
mse_left, mse_right
```
<details>
<summary>Solution</summary>
Should be (35.68916307096633, 12.770261374699789)<p>
<pre>
lefty = y[X['ENG']<split]
righty = y[X['ENG']>=split]
mleft = np.mean(lefty)
mright = np.mean(righty)
mse_left = mean_squared_error(lefty, [mleft]*len(lefty))
mse_right = mean_squared_error(righty, [mright]*len(righty))
</pre>
</details>
**Q.** How does the MSE for the overall y compare with the average of the left and right partition MSEs (which is about 24.2)?
<details>
<summary>Solution</summary>
After the split the MSE of the children is much lower than before the split, therefore, it is a worthwhile split.
</details>
**Q.** Set the split value to 100 and recompare MSE values for y, left, and right.
<details>
<summary>Solution</summary>
With split=100, mse_left and mse_right become 33.6 and 41.0. These are still less than the y MSE of 60.7, so the split is worthwhile, but not nearly as good as splitting at 200.
</details>
### Effect of deeper trees
Consider the sequence of tree depths 1..6 for horsepower vs MPG.
```
X = df_cars[['ENG']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,6, figsize=(14,3), sharey=True)
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=i+1)
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
feature_names='Horsepower',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
```
**Q.** Focusing on the orange horizontal lines, what do you notice as more splits appear?
<details>
<summary>Solution</summary>
With depth 1, the model is biased due to the coarseness of the approximation (just 2 leaf means). Depth 2 gives a much better approximation, so bias is lower. As we add more depth to the tree, the number of splits increases and these appear to be chasing details of the data, decreasing bias on the training set but also hurting generality.
</details>
**Q.** Consider the MSE for the 4 leaves of a depth 2 tree and 15 leaves of a depth 4 tree. What happens to the average MSE per leaf? What happens to the leaf sizes and how is it related to average MSE?
```
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=2)
show_mse_leaves(df_cars[['ENG']], df_cars['MPG'], max_depth=4)
```
<details>
<summary>Solution</summary>
The average MSE is much lower as we increase depth because that allows the tree to isolate pure/more-similar regions. This also shrinks leaf size since we are splitting more as the tree deepens.
</details>
Consider the plot of the CYL feature (num cylinders) vs MPG:
```
X = df_cars[['CYL']].values
y = df_cars['MPG'].values
fig, axes = plt.subplots(1,3, figsize=(7,2.5), sharey=True)
depths = [1,2,10]
for i,ax in enumerate(axes.flatten()):
dt = DecisionTreeRegressor(max_depth=depths[i])
dt.fit(X, y)
t = rtreeviz_univar(dt,
X, y,
                        feature_names='CYL',
markersize=5,
mean_linewidth=1,
target_name='MPG' if i==0 else None,
fontsize=9,
show={'splits','title'},
ax=ax)
ax.set_title(f"Depth {i+1}", fontsize=9)
plt.tight_layout()
plt.show()
```
**Q.** Explain why the graph looks like a bunch of vertical bars.
<details>
<summary>Solution</summary>
The x values are integers and will clump together. Since there are many MPG values at each int, you get vertical clumps of data.
</details>
**Q.** Why don't we get many more splits for depth 10 vs depth 2?
<details>
<summary>Solution</summary>
Once each unique x value has a "bin", there are no more splits to do.
</details>
**Q.** Why are the orange predictions bars at the levels they are in the plot?
<details>
<summary>Solution</summary>
Decision tree leaves predict the average y for all samples in a leaf.
</details>
## Classification
```
wine = load_wine()
df_wine = pd.DataFrame(data=wine.data, columns=wine.feature_names)
df_wine.head(3)
feature_names = list(wine.feature_names)
class_names = list(wine.target_names)
```
### 1 variable
```
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
```
**Q.** Where would you split this (vertically) if you could only split once?
<details>
<summary>Solution</summary>
The split location that gets most pure subregion might be about 1.5 because it nicely carves off the left green samples.
</details>
**Alter the code to show the split with arg show={'splits'}**
<details>
<summary>Solution</summary>
<pre>
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={'splits'},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
</pre>
</details>
**Q.** For max_depth=2, how many splits will we get?
<details>
<summary>Solution</summary>
3. We get one split for root and then with depth=2, we have 2 children that each get a split.
</details>
**Q.** Where would you split this graph in that many places?
<details>
<summary>Solution</summary>
Once we carve off the leftmost green, we would want to isolate the blue in between 1.3 and 2.3. The other place to split is not obvious as there is no great choice. (sklearn will add a split point at 1.0)
</details>
**Alter the code to show max_depth=2**
<details>
<summary>Solution</summary>
<pre>
X = df_wine[['flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=2)
dt.fit(X, y)
fig, ax = plt.subplots(1,1, figsize=(4,1.8))
ct = ctreeviz_univar(dt, X, y,
feature_names = 'flavanoids',
class_names=class_names,
target_name='Wine',
nbins=40, gtype='strip',
fontsize=9,
show={'splits'},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
</pre>
</details>
### Gini impurity
Let's compute the gini impurity for left and right sides for a depth=1 tree that splits flavanoids at 1.3. Here's a function that computes the value:
$$
Gini({\bf p}) = \sum_{i=1}^{k} p_i \left[ \sum_{j \ne i}^k p_j \right] = \sum_{i=1}^{k} p_i (1 - p_i) = 1 - \sum_{i=1}^{k} p_i^2
$$
where $p_i = \frac{|y[y==i]|}{|y|}$. Since $\sum_{j \ne i}^k p_j$ is the probability of "not class $i$", we can summarize it as just $1-p_i$. The gini value then sums $p_i$ times "not class $i$" over the $k$ classes. The value $p_i$ is the probability of seeing class $i$ in the list of target values, $y$.
```
def gini(y):
"""
Compute gini impurity from y vector of class values (from k unique values).
    Result is in range 0..(k-1)/k inclusive; binary range is 0..1/2.
See https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity"
"""
_, counts = np.unique(y, return_counts=True)
p = counts / len(y)
return 1 - np.sum( p**2 )
```
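As a quick sanity check on the function (toy label vectors, not the wine data): a pure node should give 0 and a perfectly mixed two-class node should give 1/2.
```
print(gini(np.array([1, 1, 1, 1])))  # 0.0 -> node contains a single class
print(gini(np.array([0, 0, 1, 1])))  # 0.5 -> maximum impurity for two classes
```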
**Q.** Using that function, what is the gini impurity for the overall y target?
<details>
<summary>Solution</summary>
gini(y) # about 0.66
</details>
**Get all y values for rows where `df_wine['flavanoids']`<1.3 into variable `lefty` and `>=` into `righty`**
```
lefty = ...
righty = ...
```
<details>
<summary>Solution</summary>
<pre>
lefty = y[df_wine['flavanoids']<1.3]
righty = y[df_wine['flavanoids']>=1.3]
</pre>
</details>
**Q.** What are the gini values for left and right partitions?
<details>
<summary>Solution</summary>
gini(lefty), gini(righty) # about 0.27, 0.53
</details>
**Q.** What can we conclude about the purity of left and right? Also, compare to gini for all y values.
<details>
<summary>Solution</summary>
Left partition is much more pure than right but right is still more pure than original gini(y). We can conclude that the split is worthwhile as the partition would let us give more accurate predictions.
</details>
### 2 variables
```
X = df_wine[['alcohol','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ct = ctreeviz_bivar(dt, X, y,
feature_names = ['alcohol','flavanoid'], class_names=class_names,
                    target_name='Wine',
show={},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax
)
```
**Q.** Which variable and split point would you choose if you could only split once?
<details>
<summary>Solution</summary>
Because the blue dots are spread vertically, a horizontal split won't be very good. Hence, we should choose the variable alcohol. The best split will carve off the blue dots, leaving the yellow and green mixed up. A split at alcohol=12.7 seems pretty good.
</details>
**Modify the code to view the splits and compare your answer**
**Q.** Which variable and split points would you choose next for depth=2?
<details>
<summary>Solution</summary>
Once we carve off most of the blue vertically, we should separate the yellow by choosing flavanoid=1.7 to split horizontally. NOTICE, however, that the 2nd split will not go across the entire graph since we are splitting only the region on the right. The split on the left can be at flavanoid=1 so we isolate the green from the blue on the left.
</details>
**Modify the code to view the splits for depth=2 and compare your answer**
### Gini
Let's examine gini impurity for a different pair of variables.
```
X = df_wine[['proline','flavanoids']].values
y = wine.target
dt = DecisionTreeClassifier(max_depth=1)
dt.fit(X, y)
fig, ax = plt.subplots(1, 1, figsize=(4,3))
ctreeviz_bivar(dt, X, y,
feature_names = ['proline','flavanoid'],
class_names=class_names,
               target_name='Wine',
show={'splits'},
colors={'scatter_marker_alpha':1, 'scatter_marker_alpha':1},
ax=ax)
plt.show()
```
**Get all y values for rows where the split var is less than the split value into variable `lefty` and those `>=` into `righty`**
```
lefty = ...
righty = ...
```
<details>
<summary>Solution</summary>
<pre>
lefty = y[df_wine['proline']<750]
righty = y[df_wine['proline']>=750]
</pre>
</details>
**Print out the gini for y, lefty, righty**
<details>
<summary>Solution</summary>
<pre>
gini(y), gini(lefty), gini(righty)
</pre>
</details>
## Train a single tree and print out the training accuracy (num correct / total)
```
t = DecisionTreeClassifier()
t.fit(df_wine, y)
accuracy_score(y, t.predict(df_wine))
```
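The unpruned tree typically memorizes the training data, so this accuracy is optimistic. A hedged sketch of a held-out estimate, using the `train_test_split` imported above with an arbitrary 80/20 split:
```
X_train, X_test, y_train, y_test = train_test_split(df_wine, y, test_size=0.2, random_state=42)
t2 = DecisionTreeClassifier()
t2.fit(X_train, y_train)
print("train accuracy:", accuracy_score(y_train, t2.predict(X_train)))
print("test accuracy: ", accuracy_score(y_test, t2.predict(X_test)))
```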
Take a look at the feature importance:
```
from rfpimp import *
I = importances(t, df_wine, y)
plot_importances(I)
```
|
github_jupyter
|
## Dataset
The CIFAR-10 dataset (Canadian Institute For Advanced Research) is a collection of images that are commonly used to train machine learning and computer vision algorithms. It is one of the most widely used datasets for machine learning research. The CIFAR-10 dataset contains 60,000 32x32 color images in 10 different classes. The 10 different classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. There are 6,000 images of each class.
Computer algorithms for recognizing objects in photos often learn by example. CIFAR-10 is a set of images that can be used to teach a computer how to recognize objects. Since the images in CIFAR-10 are low-resolution (32x32), this dataset can allow researchers to quickly try different algorithms to see what works. Various kinds of convolutional neural networks tend to be the best at recognizing the images in CIFAR-10.
<table>
<tr>
<td class="cifar-class-name">airplane</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/airplane10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">automobile</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/automobile10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">bird</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/bird10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">cat</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/cat10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">deer</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/deer10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">dog</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">frog</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/frog10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">horse</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/horse10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">ship</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/ship10.png" class="cifar-sample" /></td>
</tr>
<tr>
<td class="cifar-class-name">truck</td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck1.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck2.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck3.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck4.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck5.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck6.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck7.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck8.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck9.png" class="cifar-sample" /></td>
<td><img src="https://www.cs.toronto.edu/~kriz/cifar-10-sample/truck10.png" class="cifar-sample" /></td>
</tr>
</table>
[Dataset Download](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz)
### 1. Load CIFAR-10 Database
```
import keras
from keras.datasets import cifar10
# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
```
### 2. Visualize the First 36 Training Images
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(20,5))
for i in range(36):
ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_train[i]))
```
### 3. Rescale the Images by Dividing Every Pixel in Every Image by 255
```
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
```
### 4. Break Dataset into Training, Testing, and Validation Sets
```
from keras.utils import np_utils
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# print shape of training set
print('x_train shape:', x_train.shape)
# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
```
### 5. Define the Model Architecture
```
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu',
input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation='softmax'))
model.summary()
```
### 6. Compile the Model
```
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
```
### 7. Train the Model
```
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=100,
validation_data=(x_valid, y_valid), callbacks=[checkpointer],
verbose=2, shuffle=True)
```
### 8. Load the Model with the Best Validation Accuracy
```
# load the weights that yielded the best validation accuracy
model.load_weights('model.weights.best.hdf5')
```
### 9. Calculate Classification Accuracy on Test Set
```
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
```
### 10. Visualize Some Predictions
This may give you some insight into why the network is misclassifying certain objects.
```
# get predictions on the test set
y_hat = model.predict(x_test)
# define text labels (source: https://www.cs.toronto.edu/~kriz/cifar.html)
cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
# plot a random sample of test images, their predicted labels, and ground truth
fig = plt.figure(figsize=(20, 8))
for i, idx in enumerate(np.random.choice(x_test.shape[0], size=32, replace=False)):
ax = fig.add_subplot(4, 8, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_test[idx]))
pred_idx = np.argmax(y_hat[idx])
true_idx = np.argmax(y_test[idx])
ax.set_title("{} ({})".format(cifar10_labels[pred_idx], cifar10_labels[true_idx]),
color=("green" if pred_idx == true_idx else "red"))
```
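Beyond eyeballing a random sample, a confusion matrix summarizes which classes get mixed up most often; this is a small sketch that assumes scikit-learn is available in the environment.
```
from sklearn.metrics import confusion_matrix
# rows are true classes, columns are predicted classes, ordered as in cifar10_labels
cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_hat, axis=1))
print(cifar10_labels)
print(cm)
```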
|
github_jupyter
|
# Self Supervised Learning Fastai Extension
> Implementation of popular SOTA self-supervised learning algorithms as Fastai Callbacks.
You may find documentation [here](https://keremturgutlu.github.io/self_supervised) and github repo [here](https://github.com/keremturgutlu/self_supervised/tree/master/)
## Install
`pip install self-supervised`
## Algorithms
Here are the list of implemented algorithms:
- [SimCLR](https://arxiv.org/pdf/2002.05709.pdf)
- [BYOL](https://arxiv.org/pdf/2006.07733.pdf)
- [SwAV](https://arxiv.org/pdf/2006.09882.pdf)
## Simple Usage
```python
from self_supervised.simclr import *
dls = get_dls(resize, bs)
model = create_simclr_model(arch=xresnet34, pretrained=False)
learn = Learner(dls, model, SimCLRLoss(temp=0.1), opt_func=opt_func, cbs=[SimCLR(size=size)])
learn.fit_flat_cos(100, 1e-2)
```
```python
from self_supervised.byol import *
dls = get_dls(resize, bs)
model = create_byol_model(arch=xresnet34, pretrained=False)
learn = Learner(dls, model, byol_loss, opt_func=opt_func, cbs=[BYOL(size=size, T=0.99)])
learn.fit_flat_cos(100, 1e-2)
```
```python
from self_supervised.swav import *
dls = get_dls(resize, bs)
model = create_swav_model(arch=xresnet34, pretrained=False)
learn = Learner(dls, model, SWAVLoss(), opt_func=opt_func, cbs=[SWAV(crop_sizes=[size,96],
num_crops=[2,6],
min_scales=[0.25,0.2],
max_scales=[1.0,0.35])])
learn.fit_flat_cos(100, 1e-2)
```
## ImageWang Benchmarks
All of the algorithms implemented in this library have been evaluated in [ImageWang Leaderboard](https://github.com/fastai/imagenette#image%E7%BD%91-leaderboard).
Overall, the algorithms rank as `SwAV > BYOL > SimCLR` in most of the benchmarks. For details you may inspect the history of the [ImageWang Leaderboard](https://github.com/fastai/imagenette#image%E7%BD%91-leaderboard) on GitHub.
It should be noted that during these experiments no hyperparameter selection/tuning was done beyond using `learn.lr_find()` and sanity-checking the data augmentations by visualizing batches. So there is still room for improvement, and the overall ranking of the algorithms may change based on your setup. Still, the overall rankings are on par with the papers.
## Contributing
Contributions and or requests for new self-supervised algorithms are welcome. This repo will try to keep itself up-to-date with recent SOTA self-supervised algorithms.
Before raising a PR please create a new branch with name `<self-supervised-algorithm>`. You may refer to previous notebooks before implementing your Callback.
Please refer to sections `Developers Guide, Abbreviations Guide, and Style Guide` from https://docs.fast.ai/dev-setup and note that same rules apply for this library.
|
github_jupyter
|
# Compute norm from function space
```
from dolfin import *
import dolfin as df
import numpy as np
import logging
df.set_log_level(logging.INFO)
df.set_log_level(WARNING)
mesh = RectangleMesh(0, 0, 1, 1, 10, 10)
#mesh = Mesh(Rectangle(-10, -10, 10, 10) - Circle(0, 0, 0.1), 10)
V = FunctionSpace(mesh, "CG", 1)
W = VectorFunctionSpace(mesh, "CG", 1)
w = interpolate(Expression(["2", "1"]), W)
%%timeit
norm_squared = 0
for i in range(2):
norm_squared += w[i] ** 2
norm = norm_squared ** 0.5
norm = df.project(norm, V)
#norm = df.interpolate(norm, V)
```
This next bit is fast, but doesn't compute the norm ;-|
```
%%timeit
n = interpolate(Expression("sqrt(pow(x[0], 2) + pow(x[1], 2))"), V)
```
# Compute norm via dolfin vector norm function
```
vector = w.vector()
%%timeit
norm2 = vector.norm('l2')
print(norm2)
```
Okay, the method above is not suitable: it computes the norm of the whole vector, not the norm for the 2d vector at each node.
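To make the distinction concrete with plain numpy (ignoring dolfin's dof ordering, which is an implementation detail), compare the single scalar l2 norm of the flattened vector with the per-node norms:
```
import numpy as np
# hypothetical 2d field values at four nodes, one (x, y) pair per node
w_nodes = np.array([[2.0, 1.0],
                    [2.0, 1.0],
                    [0.0, 3.0],
                    [4.0, 0.0]])
print(np.linalg.norm(w_nodes.ravel()))   # one number for the whole vector, like vector.norm('l2')
print(np.linalg.norm(w_nodes, axis=1))   # one norm per node, which is what we actually want
```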
# Compute the norm using dolfin generic vector functions
```
mesh = RectangleMesh(0, 0, 1, 1, 10, 10)
V = FunctionSpace(mesh, "CG", 1)
W = VectorFunctionSpace(mesh, "CG", 1)
w = interpolate(Expression(["2", "1"]), W)
norm = Function(V)
norm_vec = norm.vector()
print("Shape of w = {}".format(w.vector().get_local().shape))
print("Shape of norm = {}".format(norm.vector().get_local().shape))
```
Compute the norm-squared in dolfin vector:
```
%%timeit
wx, wy = w.split(deepcopy=True)
wnorm2 = (wx.vector() * wx.vector() + wy.vector() * wy.vector())
#At this point, I don't know how to compute the square root of wnorm2 (without numpy or other non-dolfin-generic-vector code).
wnorm = np.sqrt(wnorm2.array())
norm_vec.set_local(wnorm)
```
## plot some results
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.tri as tri
coords = mesh.coordinates()
x = coords[:,0]
y = coords[:,1]
triang = tri.Triangulation(x, y)
z = norm.vector().array()  # plot the per-node norm computed above (norm2 from the %%timeit cell is a single scalar)
plt.tripcolor(triang, z, shading='flat', cmap=plt.cm.rainbow)
plt.colorbar()
coords[:,0]
```
# Wacky stuff from webpage (works on the coordinate, not the field value)
http://fenicsproject.org/qa/3693/parallel-vector-operations-something-akin-to-celliterator
```
from dolfin import *
import numpy as np
import math
mesh = RectangleMesh(-1, -1, 1, 1, 10, 10)
V = FunctionSpace(mesh, 'CG', 1)
u = Function(V)
uvec = u.vector()
dofmap = V.dofmap()
dof_x = dofmap.tabulate_all_coordinates(mesh).reshape((-1, 2))
first_dof, last_dof = dofmap.ownership_range() # U.local_size()
#rank = MPI.process_number()
new_values = np.zeros(last_dof - first_dof)
for i in range(len(new_values)):
x, y = dof_x[i]
new_values[i] = math.sqrt(x **2 + y **2)
uvec.set_local(new_values)
uvec.apply('insert')
#plot(u, title=str(rank))
#interactive()
dof_x[0]
mesh.coordinates()[0]
```
## Wacky stuff from http://fenicsproject.org/qa/3532/avoiding-assembly-vector-operations-scalar-vector-spaces
```
from dolfin import *
mesh = RectangleMesh(0.0, 0.0, 1.0, 1.0, 10, 10)
V = FunctionSpace(mesh, "Lagrange", 1)
V_vec = VectorFunctionSpace(mesh, "Lagrange", 1)
W = V_vec
c = project(Expression('1.1'), V)
v = as_vector((1,2))
d = project(c*v,V_vec)
d.vector().array()
W = VectorFunctionSpace(mesh, "CG", 1)
w = interpolate(Expression(["2", "1"]), W)
%%timeit
#dd = w #Function(V_vec)
dofs0 = V_vec.sub(0).dofmap().dofs() # indices of x-components
dofs1 = V_vec.sub(1).dofmap().dofs() # indices of y-components
norm = Function(V)
norm.vector()[:] = np.sqrt(w.vector()[dofs0] * w.vector()[dofs0] + w.vector()[dofs1] * w.vector()[dofs1])
norm = Function(V)
%%timeit
norm.vector()[:] = np.sqrt(w.vector()[dofs0] * w.vector()[dofs0] + w.vector()[dofs1] * w.vector()[dofs1])
norm.vector().array()
```
# Done a number of tests. Implement one or two versions as functions
```
import numpy as np
def value_dim(w):
if isinstance(w.function_space(), df.FunctionSpace):
# Scalar field.
return 1
else:
# value_shape() returns a tuple (N,) and int is required.
return w.function_space().ufl_element().value_shape()[0]
def compute_pointwise_norm(w, target=None, method=1):
"""Given a function vectior function w, compute the norm at each vertex, and store in scalar function target.
If target is given (a scalar dolfin Function), then store the result in there, and return reference to it.
    If target is not given, create the object and return a reference to it.
    The method argument selects which implementation to use.
"""
if not target:
        raise NotImplementedError("This is missing - could create a df.Function(V) here")
dim = value_dim(w)
    assert dim in [3], "Only implemented for 3d vector fields"
if method == 1:
wx, wy, wz = w.split(deepcopy=True)
wnorm = np.sqrt(wx.vector() * wx.vector() + wy.vector() * wy.vector() + wz.vector() * wz.vector())
target.vector().set_local(wnorm)
elif method == 2:
V_vec = w.function_space()
dofs0 = V_vec.sub(0).dofmap().dofs() # indices of x-components
dofs1 = V_vec.sub(1).dofmap().dofs() # indices of y-components
dofs2 = V_vec.sub(2).dofmap().dofs() # indices of z-components
target.vector()[:] = np.sqrt(w.vector()[dofs0] * w.vector()[dofs0] +\
w.vector()[dofs1] * w.vector()[dofs1] +\
w.vector()[dofs2] * w.vector()[dofs2])
else:
raise NotImplementedError("method {} unknown".format(method))
import dolfin as df
def create_test_system(nx, ny=None):
if not ny:
ny = nx
nz = ny
mesh = df.BoxMesh(0, 0, 0, 1, 1, 1, nx, ny, nz)
V = df.FunctionSpace(mesh, "CG", 1)
W = df.VectorFunctionSpace(mesh, "CG", 1)
w = df.interpolate(Expression(["2", "1", "2"]), W)
target = df.Function(V)
return w, mesh, V, W, target
w, mesh, V, W, norm = create_test_system(5)
%timeit compute_pointwise_norm(w, norm, method=1)
assert norm.vector().array()[0] == np.sqrt(2*2 + 1 + 2*2)
assert norm.vector().array()[0] == np.sqrt(2*2 + 1 + 2*2)
%timeit compute_pointwise_norm(w, norm, method=2)
assert norm.vector().array()[0] == np.sqrt(2*2 + 1 + 2*2)
compute_pointwise_norm(w, norm, method=1)
norm.vector().array()[0]
```
|
github_jupyter
|
## Create Data
```
import numpy as np
import matplotlib.pyplot as plt
from patsy import dmatrix
from statsmodels.api import GLM, families
def simulate_poisson_process(rate, sampling_frequency):
return np.random.poisson(rate / sampling_frequency)
def plot_model_vs_true(time, spike_train, firing_rate, conditional_intensity, sampling_frequency):
fig, axes = plt.subplots(2, 1, figsize=(12, 6), sharex=True, constrained_layout=True)
s, t = np.nonzero(spike_train)
axes[0].scatter(np.unique(time)[s], t, s=1, color='black')
axes[0].set_ylabel('Trials')
axes[0].set_title('Simulated Spikes')
axes[0].set_xlim((0, 1))
axes[1].plot(np.unique(time), firing_rate[:, 0],
linestyle='--', color='black',
linewidth=4, label='True Rate')
axes[1].plot(time.ravel(), conditional_intensity * sampling_frequency,
linewidth=4, label='model conditional intensity')
axes[1].set_xlabel('Time')
axes[1].set_ylabel('Firing Rate (Hz)')
axes[1].set_title('True Rate vs. Model')
axes[1].set_ylim((0, 15))
plt.legend()
n_time, n_trials = 1500, 1000
sampling_frequency = 1500
# Firing rate starts at 5 Hz and switches to 10 Hz
firing_rate = np.ones((n_time, n_trials)) * 10
firing_rate[:n_time // 2, :] = 5
spike_train = simulate_poisson_process(
firing_rate, sampling_frequency)
time = (np.arange(0, n_time)[:, np.newaxis] / sampling_frequency *
np.ones((1, n_trials)))
trial_id = (np.arange(n_trials)[np.newaxis, :]
* np.ones((n_time, 1)))
```
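A quick, hypothetical check that the simulation behaves as intended: the empirical rate (mean spike count per bin times the sampling frequency) should land near the programmed 5 Hz and 10 Hz levels.
```
rate_first_half = spike_train[:n_time // 2].mean() * sampling_frequency
rate_second_half = spike_train[n_time // 2:].mean() * sampling_frequency
print(rate_first_half, rate_second_half)  # expected to be close to 5 and 10 Hz
```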
## Good Fit
```
# Fit a spline model to the firing rate
design_matrix = dmatrix('bs(time, df=5)', dict(time=time.ravel()))
fit = GLM(spike_train.ravel(), design_matrix,
family=families.Poisson()).fit()
conditional_intensity = fit.mu
plot_model_vs_true(time, spike_train, firing_rate, conditional_intensity, sampling_frequency)
plt.savefig('simulated_spikes_model.png')
from time_rescale import TimeRescaling
conditional_intensity = fit.mu
rescaled = TimeRescaling(conditional_intensity,
spike_train.ravel(),
trial_id.ravel())
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
rescaled.plot_ks(ax=axes[0])
rescaled.plot_rescaled_ISI_autocorrelation(ax=axes[1])
plt.savefig('time_rescaling_ks_autocorrelation.png')
```
### Adjust for short trials
```
rescaled_adjusted = TimeRescaling(conditional_intensity,
spike_train.ravel(),
trial_id.ravel(),
adjust_for_short_trials=True)
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
rescaled_adjusted.plot_ks(ax=axes[0])
rescaled_adjusted.plot_rescaled_ISI_autocorrelation(ax=axes[1])
plt.savefig('time_rescaling_ks_autocorrelation_adjusted.png')
```
## Bad Fit
```
constant_fit = GLM(spike_train.ravel(),
np.ones_like(spike_train.ravel()),
family=families.Poisson()).fit()
conditional_intensity = constant_fit.mu
plot_model_vs_true(time, spike_train, firing_rate, conditional_intensity, sampling_frequency)
plt.savefig('constant_model_fit.png')
bad_rescaled = TimeRescaling(constant_fit.mu,
spike_train.ravel(),
trial_id.ravel(),
adjust_for_short_trials=True)
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
bad_rescaled.plot_ks(ax=axes[0], scatter_kwargs=dict(s=10))
axes[0].set_title('KS Plot')
bad_rescaled.plot_rescaled_ISI_autocorrelation(ax=axes[1], scatter_kwargs=dict(s=10))
axes[1].set_title('Autocorrelation');
plt.savefig('time_rescaling_ks_autocorrelation_bad_fit.png')
```
|
github_jupyter
|
```
# Dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import timedelta
import time
from datetime import date
# Import SQL Alchemy
from sqlalchemy import create_engine, ForeignKey, func
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
# Import PyMySQL (Not needed if mysqlclient is installed)
import pymysql
pymysql.install_as_MySQLdb()
firstDate = "2017-07-17"
lastDate = "2017-07-30"
engine = create_engine("sqlite:///hawaii.sqlite")
conn = engine.connect()
Base = automap_base()
Base.prepare(engine, reflect=True)
# mapped classes are now created with names by default
# matching that of the table name.
Base.classes.keys()
Measurement = Base.classes.Measurements
Station = Base.classes.Stations
# To push the objects made and query the server we use a Session object
session = Session(bind=engine)
# Calculate the date 1 year ago from today
prev_year = date.today() - timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
results = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= prev_year).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(results, columns=['date', 'precipitation'])
df.head()
yAxis = df.precipitation
xAxis = df.date
plt.figure(figsize=(15,3))
plt.bar(xAxis, yAxis, color='blue', alpha = 0.5, align='edge')
plt.xticks(np.arange(12), df.date[1:13], rotation=90)
plt.xlabel('Date')
plt.ylabel('Precipitation (Inches)')
plt.title('Precipitation')
plt.show()
df.describe()
totalStations = session.query(Station.station).count()
totalStations
activeStations = session.query(Measurement.station, Measurement.tobs, func.count(Measurement.station)).group_by(Measurement.station).all()
dfAS = pd.DataFrame(activeStations, columns = ['station', 'tobs', 'stationCount'])
print(dfAS)
maxObs = dfAS.loc[(dfAS['tobs'] == dfAS['tobs'].max())]
maxObs
tobsData = session.query(Measurement.date, Measurement.tobs).filter(Measurement.date >= prev_year).all()
dfTD = pd.DataFrame(tobsData, columns = ['Date', 'tobs']).sort_values('tobs', ascending = False)
dfTD.head()
plt.hist(dfTD['tobs'], bins=12, color= "blue")
plt.xlabel('Tobs (bins=12)')
plt.ylabel('Frequency')
plt.title('Tobs Frequency')
plt.legend('Tobs')
plt.show()
def calcTemps(x):
    # Filter the observations to the date window [firstDate, lastDate] and return min/avg/max tobs
    subset = x[(x['Date'] >= firstDate) & (x['Date'] <= lastDate)]
    return subset['tobs'].min(), subset['tobs'].mean(), subset['tobs'].max()
calcTemps(dfTD)
```
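An alternative sketch that pushes the date filtering into the database instead of pandas, using the mapped `Measurement` class and the `func` aggregates imported above (the printed result format is an assumption):
```
results = (session.query(func.min(Measurement.tobs),
                         func.avg(Measurement.tobs),
                         func.max(Measurement.tobs))
                  .filter(Measurement.date >= firstDate)
                  .filter(Measurement.date <= lastDate)
                  .all())
print(results)  # e.g. [(min_tobs, avg_tobs, max_tobs)]
```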
|
github_jupyter
|
## Global Air Pollution Measurements
* [Air Quality Index - Wiki](https://en.wikipedia.org/wiki/Air_quality_index)
* [BigQuery - Wiki](https://en.wikipedia.org/wiki/BigQuery)
In this notebook, data is extracted from the *BigQuery Public Data* program, accessible through *Kaggle*. The BigQuery helper object converts data in cloud storage into a *Pandas DataFrame*. The query syntax is the same as *SQL*. Because the full dataset is very large, converting all of it to a DataFrame is cumbersome, so each query is written to return only what is needed for visualization.
***
>**Baisc attributes of Air quality index**
* Measurement units
* $ug/m^3$: micro gram/cubic meter
* $ppm$: Parts Per Million
* Pollutant
* $O3$: Ozone gas
    * $SO2$: Sulphur Dioxide
    * $NO2$: Nitrogen Dioxide
* $PM 2.5$: Particles with an aerodynamic diameter less than $2.5 μm$
* $PM 10$: Particles with an aerodynamic diameter less than $10 μm$
* $CO$: Carbon monoxide
**Steps**
1. Load Packages
2. Bigquery Object
3. AQI range and Statistics
4. Distribution of country listed in AQI
5. Location
6. Air Quality Index value distribution map view
7. Pollutant Statistics
8. Distribution of pollutant and unit
9. Distribution of Source name
10. Sample AQI Averaged over in hours
11. AQI variation with time
12. Country Heatmap
13. Animation
### Load packages
```
# Load packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.basemap import Basemap
import folium
import folium.plugins as plugins
import warnings
warnings.filterwarnings('ignore')
pd.options.display.max_rows =10
%matplotlib inline
```
### Bigquery
BigQuery is a RESTful web service that enables interactive analysis of massively large datasets working in conjunction with Google Storage. It is an Infrastructure as a Service that may be used complementarily with MapReduce.
```
# Customized query helper function explosively in Kaggle
import bq_helper
# Helper object
openAQ = bq_helper.BigQueryHelper(active_project='bigquery-public-data',
dataset_name='openaq')
# List of table
openAQ.list_tables()
#Schema
openAQ.table_schema('global_air_quality')
```
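Because BigQuery bills by bytes scanned, it can help to estimate a query's cost before running it; a small sketch, assuming the `estimate_query_size` and `max_gb_scanned` helpers provided by Kaggle's `bq_helper` package:
```
query = """SELECT value, country
           FROM `bigquery-public-data.openaq.global_air_quality`
           WHERE unit = 'µg/m³'
        """
# estimated number of gigabytes this query would scan if executed
print(openAQ.estimate_query_size(query))
# query_to_pandas_safe cancels queries that would scan more than the given limit
sample = openAQ.query_to_pandas_safe(query, max_gb_scanned=1)
```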
### Table display
```
openAQ.head('global_air_quality')
# Summary statics
query = """SELECT value,averaged_over_in_hours
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³'
"""
p1 = openAQ.query_to_pandas(query)
p1.describe()
```
# Air Quality Index Range
* [AQI Range](http://aqicn.org/faq/2013-09-09/revised-pm25-aqi-breakpoints/)
<center><img src='https://campuspress.yale.edu/datadriven/files/2012/03/AQI-1024x634-1ybtu6l.png'></center>
The range of the AQI is 0 - 500, so let's limit the data to that range; in previous kernels these outlier data points were not removed.
```
query = """SELECT value,country
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value < 0
"""
p1 = openAQ.query_to_pandas(query)
p1.describe().T
```
There are more than 100 rows with a value less than 0. The lowest value is -999000, which is an outlier data point. An **air quality meter** is a digital instrument; if the meter reports such an error value, the sensor is disconnected or faulty.
```
query2 = """SELECT value,country,pollutant
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value > 0
"""
p2 = openAQ.query_to_pandas(query2)
print('0.99 Quantile',p2['value'].quantile(0.99))
p2.describe().T
p2[p2['value']>10000]
```
Country
* MK is *Macedonia* [wiki](https://en.wikipedia.org/wiki/Republic_of_Macedonia)
* CL is *Chile* [Wiki](https://en.wikipedia.org/wiki/Chile)
>In both countries some natural event may have happened, so the AQI is very high.
We will discard values greater than 10000, which are outlier data points.
### Distribution of country listed in AQI
```
query = """SELECT country,COUNT(country) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY country
HAVING COUNT(country) >10
ORDER BY `count`
"""
cnt = openAQ.query_to_pandas_safe(query)
cnt.tail()
plt.style.use('bmh')
plt.figure(figsize=(14,4))
sns.barplot(cnt['country'], cnt['count'], palette='magma')
plt.xticks(rotation=45)
plt.title('Distribution of country listed in data');
```
## Location
We find the different locations where air quality is measured. The location data consists of latitude, longitude, and city.
```
# Average pollution of air by countries
query = """SELECT AVG(value) as `Average`,country
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY country
ORDER BY Average DESC
"""
cnt = openAQ.query_to_pandas(query)
plt.figure(figsize=(14,4))
sns.barplot(cnt['country'],cnt['Average'], palette= sns.color_palette('gist_heat',len(cnt)))
plt.xticks(rotation=90)
plt.title('Average pollution of air by countries in unit $ug/m^3$')
plt.ylabel('Average AQI in $ug/m^3$');
```
* Countries PL (Poland) and IN (India) are the top air polluters
***
### AQI measurement center
```
query = """SELECT city,latitude,longitude,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY latitude,city,longitude
"""
location = openAQ.query_to_pandas_safe(query)
#Location AQI measurement center
m = folium.Map(location = [20,10],tiles='Mapbox Bright',zoom_start=2)
# add marker one by on map
for i in range(0,500):
folium.Marker(location = [location.iloc[i]['latitude'],location.iloc[i]['longitude']],\
popup=location.iloc[i]['city']).add_to(m)
m # DRAW MAP
```
We find that there are many air quality index measurement stations across the US and Europe. There are few measurement centers on the African continent, and we can hardly find any in the Middle East or Russia.
### Air Quality Index value distribution Map view
```
query = """SELECT city,latitude,longitude,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY latitude,city,longitude
"""
location = openAQ.query_to_pandas_safe(query)
location.dropna(axis=0, inplace=True)
plt.style.use('ggplot')
f,ax = plt.subplots(figsize=(14,10))
m1 = Basemap(projection='cyl', llcrnrlon=-180, urcrnrlon=180, llcrnrlat=-90, urcrnrlat=90,
resolution='c',lat_ts=True)
m1.drawmapboundary(fill_color='#A6CAE0', linewidth=0)
m1.fillcontinents(color='grey', alpha=0.3)
m1.drawcoastlines(linewidth=0.1, color="white")
m1.shadedrelief()
m1.bluemarble(alpha=0.4)
avg = location['Average']
m1loc = m1(location['latitude'].tolist(),location['longitude'])
m1.scatter(m1loc[1],m1loc[0],lw=3,alpha=0.5,zorder=3,cmap='coolwarm', c=avg)
plt.title('Average air quality index value in unit $ug/m^3$')
m1.colorbar(label=' Average AQI value in unit $ug/m^3$');
```
### US
```
#USA location
query = """SELECT
MAX(latitude) as `max_lat`,
MIN(latitude) as `min_lat`,
MAX(longitude) as `max_lon`,
MIN(longitude) as `min_lon`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'US' """
us_loc = openAQ.query_to_pandas_safe(query)
us_loc
query = """ SELECT city,latitude,longitude,averaged_over_in_hours,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'US' AND unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY latitude,city,longitude,averaged_over_in_hours,country """
us_aqi = openAQ.query_to_pandas_safe(query)
# USA
min_lat = us_loc['min_lat']
max_lat = us_loc['max_lat']
min_lon = us_loc['min_lon']
max_lon = us_loc['max_lon']
plt.figure(figsize=(14,8))
m2 = Basemap(projection='cyl', llcrnrlon=min_lon, urcrnrlon=max_lon, llcrnrlat=min_lat, urcrnrlat=max_lat,
resolution='c',lat_ts=True)
m2.drawcounties()
m2.drawmapboundary(fill_color='#A6CAE0', linewidth=0)
m2.fillcontinents(color='grey', alpha=0.3)
m2.drawcoastlines(linewidth=0.1, color="white")
m2.drawstates()
m2.bluemarble(alpha=0.4)
avg = (us_aqi['Average'])
m2loc = m2(us_aqi['latitude'].tolist(),us_aqi['longitude'])
m2.scatter(m2loc[1],m2loc[0],c = avg,lw=3,alpha=0.5,zorder=3,cmap='rainbow')
m2.colorbar(label='Average AQI value in unit $ug/m^3$')
plt.title('Average air quality index in unit $ug/m^3$ of US');
```
The AQI across the US ranges from 0 to 400; most city data points are within 100.
### India
```
#INDIA location
query = """SELECT
MAX(latitude) as `max_lat`,
MIN(latitude) as `min_lat`,
MAX(longitude) as `max_lon`,
MIN(longitude) as `min_lon`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'IN' """
in_loc = openAQ.query_to_pandas_safe(query)
in_loc
query = """ SELECT city,latitude,longitude,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'IN' AND unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY latitude,city,longitude,country """
in_aqi = openAQ.query_to_pandas_safe(query)
# INDIA
min_lat = in_loc['min_lat']-5
max_lat = in_loc['max_lat']+5
min_lon = in_loc['min_lon']-5
max_lon = in_loc['max_lon']+5
plt.figure(figsize=(14,8))
m3 = Basemap(projection='cyl', llcrnrlon=min_lon, urcrnrlon=max_lon, llcrnrlat=min_lat, urcrnrlat=max_lat,
resolution='c',lat_ts=True)
m3.drawcounties()
m3.drawmapboundary(fill_color='#A6CAE0', linewidth=0)
m3.fillcontinents(color='grey', alpha=0.3)
m3.drawcoastlines(linewidth=0.1, color="white")
m3.drawstates()
avg = in_aqi['Average']
m3loc = m3(in_aqi['latitude'].tolist(),in_aqi['longitude'])
m3.scatter(m3loc[1],m3loc[0],c = avg,alpha=0.5,zorder=5,cmap='rainbow')
m3.colorbar(label='Average AQI value in unit $ug/m^3$')
plt.title('Average air quality index in unit $ug/m^3$ of India');
```
### Distribution of pollutant and unit
```
# Unit query
query = """SELECT unit,COUNT(unit) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY unit
"""
unit = openAQ.query_to_pandas(query)
# Pollutant query
query = """SELECT pollutant,COUNT(pollutant) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY pollutant
"""
poll_count = openAQ.query_to_pandas_safe(query)
plt.style.use('fivethirtyeight')
plt.style.use('bmh')
f, ax = plt.subplots(1,2,figsize = (14,5))
ax1,ax2= ax.flatten()
ax1.pie(x=unit['count'],labels=unit['unit'],shadow=True,autopct='%1.1f%%',explode=[0,0.1],\
colors=sns.color_palette('hot',2),startangle=90,)
ax1.set_title('Distribution of measurement unit')
explode = np.arange(0,0.1)
ax2.pie(x=poll_count['count'],labels=poll_count['pollutant'], shadow=True, autopct='%1.1f%%',\
colors=sns.color_palette('Set2',5),startangle=60,)
ax2.set_title('Distribution of pollutants in air');
```
* The most popular unit of measurement of air quality is $ug/m^3$
* $O3$ accounts for about 23% of the pollutant records.
***
### Pollutant Statistics
```
query = """ SELECT pollutant,
AVG(value) as `Average`,
COUNT(value) as `Count`,
MIN(value) as `Min`,
MAX(value) as `Max`,
SUM(value) as `Sum`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY pollutant
"""
cnt = openAQ.query_to_pandas_safe(query)
cnt
```
We find
* CO (carbon monoxide) has a very wide range of values.
* The sum for CO is the highest in the list.
* Except for CO, all average AQI values are below 54 $ug/m^3$.
### Pollutants by Country
```
query = """SELECT AVG(value) as`Average`,country, pollutant
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³'AND value BETWEEN 0 AND 10000
GROUP BY country,pollutant"""
p1 = openAQ.query_to_pandas_safe(query)
# By country
p1_pivot = p1.pivot(index = 'country',values='Average', columns= 'pollutant')
plt.figure(figsize=(14,15))
ax = sns.heatmap(p1_pivot, lw=0.01, cmap=sns.color_palette('Reds',500))
plt.yticks(rotation=30)
plt.title('Heatmap average AQI by Pollutant');
f,ax = plt.subplots(figsize=(14,6))
sns.barplot(p1[p1['pollutant']=='co']['country'],p1[p1['pollutant']=='co']['Average'],)
plt.title('CO AQI in different countries')
plt.xticks(rotation=90);
f,ax = plt.subplots(figsize=(14,6))
sns.barplot(p1[p1['pollutant']=='pm25']['country'],p1[p1['pollutant']=='pm25']['Average'])
plt.title('pm25 AQI in different countries')
plt.xticks(rotation=90);
```
### Distribution of Source name
The institutions where the AQI is measured
```
#source_name
query = """ SELECT source_name, COUNT(source_name) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY source_name
ORDER BY count DESC
"""
source_name = openAQ.query_to_pandas_safe(query)
plt.figure(figsize=(14,10))
sns.barplot(source_name['count'][:20], source_name['source_name'][:20],palette = sns.color_palette('YlOrBr'))
plt.title('Distribution of Top 20 source_name')
#plt.axvline(source_name['count'].median())
plt.xticks(rotation=90);
```
We find
* Airnow is the top source in the list
* European countries rank high in the list; their institution names start with 'EEA' plus the country name.
***
### Sample AQI Averaged over in hours
The distribution of AQI samples by the number of hours they are averaged over
```
query = """SELECT averaged_over_in_hours, COUNT(*) as `count`
FROM `bigquery-public-data.openaq.global_air_quality`
GROUP BY averaged_over_in_hours
ORDER BY count DESC """
cnt = openAQ.query_to_pandas(query)
#cnt['averaged_over_in_hours'] = cnt['averaged_over_in_hours'].astype('category')
plt.figure(figsize=(14,5))
sns.barplot( cnt['averaged_over_in_hours'],cnt['count'], palette= sns.color_palette('brg'))
plt.title('Distribution of quality measurements per hour');
```
We find that air quality is most commonly averaged over one hour.
***
### AQI in ppm
```
query = """SELECT AVG(value) as`Average`,country,
EXTRACT(YEAR FROM timestamp) as `Year`,
pollutant
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'ppm'
GROUP BY country,Year,pollutant"""
pol_aqi = openAQ.query_to_pandas_safe(query)
# By month in year
plt.figure(figsize=(14,8))
sns.barplot(pol_aqi['country'], pol_aqi['Average'])
plt.title('Distribution of average AQI by country $ppm$');
```
### AQI variation with time
```
query = """SELECT EXTRACT(YEAR FROM timestamp) as `Year`,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY EXTRACT(YEAR FROM timestamp)
"""
quality = openAQ.query_to_pandas(query)
query = """SELECT EXTRACT(MONTH FROM timestamp) as `Month`,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY EXTRACT(MONTH FROM timestamp)
"""
quality1 = openAQ.query_to_pandas(query)
# plot
f,ax = plt.subplots(1,2, figsize= (14,6),sharey=True)
ax1,ax2 = ax.flatten()
sns.barplot(quality['Year'],quality['Average'],ax=ax1)
ax1.set_title('Distribution of average AQI by year')
sns.barplot(quality1['Month'],quality['Average'], ax=ax2 )
ax2.set_title('Distribution of average AQI by month')
ax2.set_ylabel('');
# by year & month
query = """SELECT EXTRACT(YEAR from timestamp) as `Year`,
EXTRACT(MONTH FROM timestamp) as `Month`,
AVG(value) as `Average`
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY year,Month"""
aqi_year = openAQ.query_to_pandas_safe(query)
# By month in year
plt.figure(figsize=(14,8))
sns.pointplot(aqi_year['Month'],aqi_year['Average'],hue = aqi_year['Year'])
plt.title('Distribution of average AQI by month');
```
We find
* the data available for some years is incomplete
* data for the years 2016 and 2017 is available completely (a quick check is shown below)
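A quick way to check this from the `aqi_year` dataframe built above is to count how many distinct months have data in each year (a small sketch):
```
# Number of distinct months with data for each year
print(aqi_year.groupby('Year')['Month'].nunique())
```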
### Country Heatmap
```
# Heatmap by country
query = """SELECT AVG(value) as `Average`,
EXTRACT(YEAR FROM timestamp) as `Year`,
country
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY country,Year
"""
coun_aqi = openAQ.query_to_pandas_safe(query)
coun_pivot = coun_aqi.pivot(index='country', columns='Year', values='Average').fillna(0)
# By month in year
plt.figure(figsize=(14,15))
sns.heatmap(coun_pivot, lw=0.01, cmap=sns.color_palette('Reds',len(coun_pivot)))
plt.yticks(rotation=30)
plt.title('Heatmap average AQI by YEAR');
```
### Animation
```
query = """SELECT EXTRACT(YEAR FROM timestamp) as `Year`,AVG(value) as `Average`,
latitude,longitude
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'µg/m³' AND value BETWEEN 0 AND 10000
GROUP BY Year, latitude,longitude
"""
p1 = openAQ.query_to_pandas_safe(query)
from matplotlib import animation,rc
import io
import base64
from IPython.display import HTML, display
import warnings
warnings.filterwarnings('ignore')
fig = plt.figure(figsize=(14,10))
plt.style.use('ggplot')
def animate(Year):
ax = plt.axes()
ax.clear()
ax.set_title('Average AQI in Year: '+str(Year))
m4 = Basemap(llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180,urcrnrlon=180,projection='cyl')
m4.drawmapboundary(fill_color='#A6CAE0', linewidth=0)
m4.fillcontinents(color='grey', alpha=0.3)
m4.drawcoastlines(linewidth=0.1, color="white")
m4.shadedrelief()
lat_y = list(p1[p1['Year'] == Year]['latitude'])
lon_y = list(p1[p1['Year'] == Year]['longitude'])
lon,lat = m4(lon_y,lat_y) # Basemap projections expect (lon, lat)
avg = p1[p1['Year'] == Year]['Average']
m4.scatter(lon,lat,c = avg,lw=2, alpha=0.3,cmap='hot_r')
ani = animation.FuncAnimation(fig,animate,list(p1['Year'].unique()), interval = 1500)
ani.save('animation.gif', writer='imagemagick', fps=1)
plt.close(1)
filename = 'animation.gif'
video = io.open(filename, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<img src="data:image/gif;base64,{0}" type="gif" />'''.format(encoded.decode('ascii')))
# Continued
```
### Thank you for visiting, please upvote if you like it.
|
github_jupyter
|
# Breast Cancer Wisconsin (Diagnostic) Data Set
* **[T81-558: Applications of Deep Learning](https://sites.wustl.edu/jeffheaton/t81-558/)**
* Dataset provided by [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29)
* [Download Here](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/data/wcbreast.csv)
This is a popular dataset that contains columns that might be useful to determine if a tumor is breast cancer or not. There are a total of 32 columns and 569 rows. This dataset is used in class to introduce binary (two class) classification. The following fields are present:
* **id** - Identity column, not really useful to a neural network.
* **diagnosis** - Diagnosis, B=Benign, M=Malignant.
* **mean_radius** - Potentially predictive field.
* **mean_texture** - Potentially predictive field.
* **mean_perimeter** - Potentially predictive field.
* **mean_area** - Potentially predictive field.
* **mean_smoothness** - Potentially predictive field.
* **mean_compactness** - Potentially predictive field.
* **mean_concavity** - Potentially predictive field.
* **mean_concave_points** - Potentially predictive field.
* **mean_symmetry** - Potentially predictive field.
* **mean_fractal_dimension** - Potentially predictive field.
* **se_radius** - Potentially predictive field.
* **se_texture** - Potentially predictive field.
* **se_perimeter** - Potentially predictive field.
* **se_area** - Potentially predictive field.
* **se_smoothness** - Potentially predictive field.
* **se_compactness** - Potentially predictive field.
* **se_concavity** - Potentially predictive field.
* **se_concave_points** - Potentially predictive field.
* **se_symmetry** - Potentially predictive field.
* **se_fractal_dimension** - Potentially predictive field.
* **worst_radius** - Potentially predictive field.
* **worst_texture** - Potentially predictive field.
* **worst_perimeter** - Potentially predictive field.
* **worst_area** - Potentially predictive field.
* **worst_smoothness** - Potentially predictive field.
* **worst_compactness** - Potentially predictive field.
* **worst_concavity** - Potentially predictive field.
* **worst_concave_points** - Potentially predictive field.
* **worst_symmetry** - Potentially predictive field.
* **worst_fractal_dimension** - Potentially predictive field.
The following code shows 10 sample rows.
```
import os
import pandas as pd
import numpy as np
path = "./data/"
filename = os.path.join(path,"wcbreast_wdbc.csv")
df = pd.read_csv(filename,na_values=['NA','?'])
# Shuffle
np.random.seed(42)
df = df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
df[0:10]
```
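Since the dataset is used to introduce binary classification, a common next step is to encode the `diagnosis` column as a numeric target. Below is a minimal sketch using the `df` loaded above; the column names are taken from the field list, so adjust them if the file uses different headers.
```
# Encode the target: benign -> 0, malignant -> 1
y = df['diagnosis'].map({'B': 0, 'M': 1}).values
# Drop the identity column and the target to keep only the predictive fields
x = df.drop(columns=['id', 'diagnosis']).values
print(x.shape, y.shape)  # expected roughly (569, 30) and (569,) given the 32 columns and 569 rows
```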
|
github_jupyter
|
# Digital Signal Processing
This collection of [jupyter](https://jupyter.org/) notebooks introduces various topics of [Digital Signal Processing](https://en.wikipedia.org/wiki/Digital_signal_processing). The theory is accompanied by computational examples written in [IPython 3](http://ipython.org/). The sources of the notebooks, as well as installation and usage instructions can be found on [GitHub](https://github.com/lev1khachatryan/Digital_Signal_Processing).
## Table of Contents
#### 1. Introduction
* [Introduction](introduction/introduction.ipynb)
#### 2. Spectral Analysis of Deterministic Signals
* [The Leakage-Effect](spectral_analysis_deterministic_signals/leakage_effect.ipynb)
* [Window Functions](spectral_analysis_deterministic_signals/window_functions.ipynb)
* [Zero-Padding](spectral_analysis_deterministic_signals/zero_padding.ipynb)
* [Short-Time Fourier Transform](spectral_analysis_deterministic_signals/stft.ipynb)
* [Summary](spectral_analysis_deterministic_signals/summary.ipynb)
#### 3. Random Signals
* [Introduction](random_signals/introduction.ipynb)
* [Amplitude Distributions](random_signals/distributions.ipynb)
* [Ensemble Averages](random_signals/ensemble_averages.ipynb)
* [Stationary and Ergodic Processes](random_signals/stationary_ergodic.ipynb)
* [Correlation Functions](random_signals/correlation_functions.ipynb)
* [Power Spectral Densities](random_signals/power_spectral_densities.ipynb)
* [Independent Processes](random_signals/independent.ipynb)
* [Important Amplitude Distributions](random_signals/important_distributions.ipynb)
* [White Noise](random_signals/white_noise.ipynb)
* [Superposition of Random Signals](random_signals/superposition.ipynb)
#### 4. Random Signals and LTI Systems
* [Introduction](random_signals_LTI_systems/introduction.ipynb)
* [Linear Mean](random_signals_LTI_systems/linear_mean.ipynb)
* [Correlation Functions](random_signals_LTI_systems/correlation_functions.ipynb)
* [Example: Measurement of Acoustic Impulse Responses](random_signals_LTI_systems/acoustic_impulse_response_measurement.ipynb)
* [Power Spectral Densities](random_signals_LTI_systems/power_spectral_densities.ipynb)
* [Wiener Filter](random_signals_LTI_systems/wiener_filter.ipynb)
#### 5. Spectral Estimation of Random Signals
* [Introduction](spectral_estimation_random_signals/introduction.ipynb)
* [Periodogram](spectral_estimation_random_signals/periodogram.ipynb)
* [Welch-Method](spectral_estimation_random_signals/welch_method.ipynb)
* [Parametric Methods](spectral_estimation_random_signals/parametric_methods.ipynb)
#### 6. Quantization
* [Introduction](quantization/introduction.ipynb)
* [Characteristic of Linear Uniform Quantization](quantization/linear_uniform_characteristic.ipynb)
* [Quantization Error of Linear Uniform Quantization](quantization/linear_uniform_quantization_error.ipynb)
* [Example: Requantization of a Speech Signal](quantization/requantization_speech_signal.ipynb)
* [Noise Shaping](quantization/noise_shaping.ipynb)
* [Oversampling](quantization/oversampling.ipynb)
* [Example: Non-Linear Quantization of a Speech Signal](quantization/nonlinear_quantization_speech_signal.ipynb)
#### 7. Realization of Non-Recursive Filters
* [Introduction](nonrecursive_filters/introduction.ipynb)
* [Fast Convolution](nonrecursive_filters/fast_convolution.ipynb)
* [Segmented Convolution](nonrecursive_filters/segmented_convolution.ipynb)
* [Quantization Effects](nonrecursive_filters/quantization_effects.ipynb)
#### 8. Realization of Recursive Filters
* [Introduction](recursive_filters/introduction.ipynb)
* [Direct Form Structures](recursive_filters/direct_forms.ipynb)
* [Cascaded Structures](recursive_filters/cascaded_structures.ipynb)
* [Quantization of Filter Coefficients](recursive_filters/quantization_of_coefficients.ipynb)
* [Quantization of Variables and Operations](recursive_filters/quantization_of_variables.ipynb)
#### 9. Design of Digital Filters
* [Design of Non-Recursive Filters by the Window Method](filter_design/window_method.ipynb)
* [Design of Non-Recursive Filters by the Frequency Sampling Method](filter_design/frequency_sampling_method.ipynb)
* [Design of Recursive Filters by the Bilinear Transform](filter_design/bilinear_transform.ipynb)
* [Example: Non-Recursive versus Recursive Filter](filter_design/comparison_non_recursive.ipynb)
* [Examples: Typical IIR-Filters in Audio](filter_design/audiofilter.ipynb)
#### Reference Cards
* [Reference Card Discrete Signals and Systems](reference_cards/RC_discrete_signals_and_systems.pdf)
* [Reference Card Random Signals and LTI Systems](reference_cards/RC_random_signals_and_LTI_systems.pdf)
|
github_jupyter
|
# Downloading GNSS station locations and tropospheric zenith delays
**Author**: Simran Sangha, David Bekaert - Jet Propulsion Laboratory
This notebook provides an overview of the functionality included in the **`raiderDownloadGNSS.py`** program. Specifically, we outline examples on how to access and store GNSS station location and tropospheric zenith delay information over a user defined area of interest and span of time. In this notebook, we query GNSS stations spanning northern California between 2016 and 2019.
We will outline the following downloading options to access station location and zenith delay information:
- For a specified range of years
- For a specified time of day
- Confined to a specified geographic bounding box
- Confined to an a priori defined list of GNSS stations
<div class="alert alert-info">
<b>Terminology:</b>
- *GNSS*: Stands for Global Navigation Satellite System. Describes any satellite constellation providing global or regional positioning, navigation, and timing services.
- *tropospheric zenith delay*: The precise atmospheric delay satellite signals experience when propagating through the troposphere.
</div>
## Table of Contents:
<a id='example_TOC'></a>
[**Overview of the raiderDownloadGNSS.py program**](#overview)
- [1. Define spatial extent and/or apriori list of stations](#overview_1)
- [2. Run parameters](#overview_2)
[**Examples of the raiderDownloadGNSS.py program**](#examples)
- [Example 1. Access data for specified year, time-step, and time of day, and across specified spatial subset](#example_1)
- [Example 2. Access data for specified range of years and time of day, and across specified spatial subset, with the maximum allowed CPUs](#example_2)
## Prep: Initial setup of the notebook
Below we set up the directory structure for this notebook exercise. In addition, we load the required modules into our python environment using the **`import`** command.
```
import os
import numpy as np
import matplotlib.pyplot as plt
## Defining the home and data directories
tutorial_home_dir = os.path.abspath(os.getcwd())
work_dir = os.path.abspath(os.getcwd())
print("Tutorial directory: ", tutorial_home_dir)
print("Work directory: ", work_dir)
# Verifying if RAiDER is installed correctly
try:
from RAiDER import downloadGNSSDelays
except:
raise Exception('RAiDER is missing from your PYTHONPATH')
os.chdir(work_dir)
```
# Supported GNSS provider
Currently **`raiderDownloadGNSS.py`** is able to access the UNR Geodetic Laboratory GNSS archive. This archive requires neither a license agreement nor a user account or special privileges.
Data naming conventions are outlined here: http://geodesy.unr.edu/gps_timeseries/README_trop2.txt
## Overview of the raiderDownloadGNSS.py program
<a id='overview'></a>
The **`raiderDownloadGNSS.py`** program allows for easy access of GNSS station locations and tropospheric zenith delays. Running **`raiderDownloadGNSS.py`** with the **`-h`** option will show the parameter options and outline several basic, practical examples.
Let us explore these options:
```
!raiderDownloadGNSS.py -h
```
### 1. Define spatial extent and/or apriori list of stations
<a id='overview_1'></a>
#### Geographic bounding box (**`--bounding_box BOUNDING_BOX`**)
An area of interest may be specified as `SNWE` coordinates using the **`--bounding_box`** option. Coordinates should be specified as a space delimited string surrounded by quotes. This example below would restrict the query to stations over northern California:
**`--bounding_box '36 40 -124 -119'`**
If no area of interest is specified, the entire global archive will be queried.
#### Text file with a priori list of station names (**`--station_file STATION_FILE`**)
The query may be restricted to an a priori list of stations. To pass this list to the program, a text file containing 4-character station IDs separated by newlines must be passed as the argument of the **`--station_file`** option.
If used in conjunction with the **`--bounding_box`** option outlined above, then listed stations which fall outside of the specified geographic bounding box will be discarded.
As an example refer to the text-file below, which would be passed as so: **`--station_file support_docs/CA_subset.txt`**
```
!head support_docs/CA_subset.txt
```
### 2. Run parameters
<a id='overview_2'></a>
#### Output directory (**`--out OUT`**)
Specify directory to deposit all outputs into with **`--out`**. Absolute and relative paths are both supported.
By default, outputs will be deposited into the current working directory where the program is launched.
#### GPS repository (**`--gpsrepo GPS_REPO`**)
Specify GPS repository you wish to query with **`--gpsrepo`**.
NOTE that currently only the following archive is supported: UNR
#### Date(s) and step (**`--date DATELIST [DATELIST ...]`**)
**REQUIRED** argument. Specify valid date(s) and an optional step in days with **`--date`** (format YYYYMMDD YYYYMMDD DD) to access delays. This can be a single date (e.g. '20200101'), two dates between which data for every day (inclusive) is queried (e.g. '2017 2019'), or two dates plus a step giving the increment in days at which data is queried (e.g. '2019 2019 12').
Note that this option mirrors a similar option in the script `raiderDelay.py`, which is used to download weather model data for specified spatiotemporal constraints (i.e. the counterpart to `raiderDownloadGNSS.py`, which downloads GNSS data).
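For illustration, the three accepted forms of the argument look like this (the last two match the invocations used in the examples below):
```
--date 20200101                # a single date
--date 20160101 20191231       # every day between two dates (inclusive)
--date 20160101 20161231 12    # every 12th day between the two dates
```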
#### Time of day (**`--returntime RETURNTIME`**)
Return tropospheric zenith delays closest to 'HH:MM:SS' time specified with **`--returntime`**.
Note that data is generally archived in 3-second increments. Thus, if a time outside of this increment is specified (e.g. '00:00:02'), the input is rounded to the closest 3-second increment (e.g. '00:00:03').
If not specified, the delays for all times of the day will be returned.
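The rounding can be pictured as snapping the requested time of day to the nearest multiple of 3 seconds. The sketch below only illustrates that behaviour and is an assumption, not the actual implementation:
```
def snap_to_3s(hh, mm, ss):
    """Round a time of day to the nearest 3-second increment."""
    total = int(round((hh * 3600 + mm * 60 + ss) / 3.0)) * 3
    return total // 3600, (total % 3600) // 60, total % 60

print(snap_to_3s(0, 0, 2))  # -> (0, 0, 3), matching the example above
```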
#### Physically download data (**`--download`**)
By default all data is virtually accessed from external zip and tarfiles. If **`--download`** is specified, these external files will be locally downloaded and stored.
Note that this option is **not recommended** for most purposes, as it is not necessary for the statistical analyses: the code is designed to handle the data virtually.
#### Number of CPUs to be used (**`--cpus NUMCPUS`**)
Specify number of cpus to be used for multiprocessing with **`--cpus`**. For most cases, multiprocessing is essential in order to access data and perform statistical analyses within a reasonable amount of time.
May specify **`--cpus all`** at your own discretion in order to leverage all available CPUs on your system.
By default 8 CPUs will be used.
#### Verbose mode (**`--verbose`**)
Specify **`--verbose`** to print all statements through entire routine. For example, print each station and year within a loop as it is being accessed by the program.
## Examples of the **`raiderDownloadGNSS.py`** program
<a id='examples'></a>
### Example 1. Access data for specified year, time-step, and time of day, and across specified spatial subset <a id='example_1'></a>
Virtually access GNSS station location and zenith delay information for the year '2016', for every 12 days, and at a UTC time of day 'HH:MM:SS' of '00:00:00', and across a geographic bounding box '36 40 -124 -119' spanning over Northern California.
The footprint of the specified geographic bounding box is depicted in **Fig. 1**.
<img src="support_docs/bbox_footprint.png" alt="footprint" width="700">
<center><b>Fig. 1</b> Footprint of geographic bounding box used in examples 1 and 2. </center>
```
!raiderDownloadGNSS.py --out products --date 20160101 20161231 12 --returntime '00:00:00' --bounding_box '36 40 -124 -119'
```
Now we can take a look at the generated products:
```
!ls products
```
A list of coordinates for all stations found within the specified geographic bounding box are recorded within **`gnssStationList_overbbox.csv`**:
```
!head products/gnssStationList_overbbox.csv
```
A list of all URL paths for zipfiles containing all tropospheric zenith delay information for a given station and year is recorded within **`gnssStationList_overbbox_withpaths.csv`**:
```
!head products/gnssStationList_overbbox_withpaths.csv
```
The zipfiles listed within **`gnssStationList_overbbox_withpaths.csv`** are virtually accessed and queried for internal tarfiles that archive all tropospheric zenith delay information acquired for a given day of the year.
Since an explicit time of day of '00:00:00' and a time-step of 12 days were specified above, only data every 12 days corresponding to the time of day '00:00:00' is passed along from each tarfile. If no data is available at that time for a given day, empty strings are passed.
This information is then appended to a primary file allocated and named for each GNSS station under the **`GPS_delays`** directory:
```
!ls products/GPS_delays
```
Finally, all of the extracted tropospheric zenith delay information stored under **`GPS_delays`** is concatenated with the GNSS station location information stored under **`gnssStationList_overbbox.csv`** into a primary comprehensive file **`UNRcombinedGPS_ztd.csv`**. In this file, the prefix `UNR` denotes the GNSS repository that has been queried, which again may be toggled with the **`--gpsrepo`** option.
**`UNRcombinedGPS_ztd.csv`** may in turn be directly used to perform basic statistical analyses using **`raiderStats.py`**. Please refer to the companion notebook **`raiderStats/raiderStats_tutorial.ipynb`** for a comprehensive outline of the program and examples.
```
!head products/UNRcombinedGPS_ztd.csv
```
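As a quick sanity check, the combined file can also be inspected with pandas (a sketch; no particular column names are assumed):
```
import pandas as pd

combined = pd.read_csv('products/UNRcombinedGPS_ztd.csv')
print(combined.shape)          # number of station-epoch rows and columns
print(list(combined.columns))  # available fields (station info plus zenith delays)
combined.head()
```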
### Example 2. Access data for specified range of years and time of day, and across specified spatial subset, with the maximum allowed CPUs <a id='example_2'></a>
Virtually access GNSS station location and zenith delay information for the years '2016-2019', for every day, at a UTC time of day 'HH:MM:SS' of '00:00:00', and across a geographic bounding box '36 40 -124 -119' spanning over Northern California.
The footprint of the specified geographic bounding box is again depicted in **Fig. 1**.
In addition to querying for multiple years, we will also experiment with using the maximum number of allowed CPUs to save some time! Recall again that the default number of CPUs used for parallelization is 8.
```
!rm -rf products
!raiderDownloadGNSS.py --out products --date 20160101 20191231 --returntime '00:00:00' --bounding_box '36 40 -124 -119' --cpus all
```
Outputs are organized again in a fashion consistent with that outlined under **Ex. 1**.
However, we have now queried data spanning from 2016 through 2019. Thus, **`UNRcombinedGPS_ztd.csv`** now contains GNSS station data recorded as late as the year 2019:
```
!grep -m 10 '2019-' products/UNRcombinedGPS_ztd.csv
```
|
github_jupyter
|
# MACHINE LEARNING LAB - 4 ( Backpropagation Algorithm )
**4. Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.**
```
import numpy as np
X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float) # X = (hours sleeping, hours studying)
y = np.array(([92], [86], [89]), dtype=float) # y = score on test
# scale units
X = X/np.amax(X, axis=0) # maximum of X array
y = y/100 # max test score is 100
class Neural_Network(object):
def __init__(self):
# Parameters
self.inputSize = 2
self.outputSize = 1
self.hiddenSize = 3
# Weights
self.W1 = np.random.randn(self.inputSize, self.hiddenSize) # (3x2) weight matrix from input to hidden layer
self.W2 = np.random.randn(self.hiddenSize, self.outputSize) # (3x1) weight matrix from hidden to output layer
def forward(self, X):
#forward propagation through our network
self.z = np.dot(X, self.W1) # dot product of X (input) and first set of 3x2 weights
self.z2 = self.sigmoid(self.z) # activation function
self.z3 = np.dot(self.z2, self.W2) # dot product of hidden layer (z2) and second set of 3x1 weights
o = self.sigmoid(self.z3) # final activation function
return o
def sigmoid(self, s):
return 1/(1+np.exp(-s)) # activation function
def sigmoidPrime(self, s):
return s * (1 - s) # derivative of sigmoid
def backward(self, X, y, o):
# backward propgate through the network
self.o_error = y - o # error in output
self.o_delta = self.o_error*self.sigmoidPrime(o) # applying derivative of sigmoid to output error
self.z2_error = self.o_delta.dot(self.W2.T) # z2 error: how much our hidden layer weights contributed to output error
self.z2_delta = self.z2_error*self.sigmoidPrime(self.z2) # applying derivative of sigmoid to z2 error
self.W1 += X.T.dot(self.z2_delta) # adjusting first set (input --> hidden) weights
self.W2 += self.z2.T.dot(self.o_delta) # adjusting second set (hidden --> output) weights
def train (self, X, y):
o = self.forward(X)
self.backward(X, y, o)
NN = Neural_Network()
for i in range(1000): # trains the NN 1,000 times
print ("\nInput: \n" + str(X))
print ("\nActual Output: \n" + str(y))
print ("\nPredicted Output: \n" + str(NN.forward(X)))
print ("\nLoss: \n" + str(np.mean(np.square(y - NN.forward(X))))) # mean sum squared loss)
NN.train(X, y)
```
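Once trained, the network can be queried on new inputs. The sketch below is only an illustration: the example student (2 hours sleeping, 8 hours studying) is made up, and the input is scaled by the same column maxima as the training data (3 and 9).
```
# Hypothetical new input, scaled like the training data
x_new = np.array([[2 / 3.0, 8 / 9.0]])
print("Predicted test score:", NN.forward(x_new) * 100)  # rescale back to a 0-100 score
```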
|
github_jupyter
|
# VarEmbed Tutorial
Varembed is a word embedding model incorporating morphological information, capturing shared sub-word features. Unlike previous work that constructs word embeddings directly from morphemes, varembed combines morphological and distributional information in a unified probabilistic framework. Varembed thus yields improvements on intrinsic word similarity evaluations. Check out the original paper, [arXiv:1608.01056](https://arxiv.org/abs/1608.01056) accepted in [EMNLP 2016](http://www.emnlp2016.net/accepted-papers.html).
Varembed is now integrated into [Gensim](http://radimrehurek.com/gensim/) providing ability to load already trained varembed models into gensim with additional functionalities over word vectors already present in gensim.
# This Tutorial
In this tutorial you will learn how to train, load and evaluate varembed model on your data.
# Train Model
The authors provide their code to train a varembed model. Check out the repository [MorphologicalPriorsForWordEmbeddings](https://github.com/rguthrie3/MorphologicalPriorsForWordEmbeddings) for instructions on training a varembed model. You'll need to use that code if you want to train a model.
# Load Varembed Model
Now that you have an already trained varembed model, you can easily load the varembed word vectors directly into Gensim. <br>
For that, you need to provide the path to the word vectors pickle file generated after you train the model and run the script to [package varembed embeddings](https://github.com/rguthrie3/MorphologicalPriorsForWordEmbeddings/blob/master/package_embeddings.py) provided in the [varembed source code repository](https://github.com/rguthrie3/MorphologicalPriorsForWordEmbeddings).
We'll use a varembed model trained on [Lee Corpus](https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/test/test_data/lee.cor) as the vocabulary, which is already available in gensim.
```
from gensim.models.wrappers import varembed
vector_file = '../../gensim/test/test_data/varembed_leecorpus_vectors.pkl'
model = varembed.VarEmbed.load_varembed_format(vectors=vector_file)
```
This loads a varembed model into Gensim. Also if you want to load with morphemes added into the varembed vectors, you just need to also provide the path to the trained morfessor model binary as an argument. This works as an optional parameter, if not provided, it would just load the varembed vectors without morphemes.
```
morfessor_file = '../../gensim/test/test_data/varembed_leecorpus_morfessor.bin'
model_with_morphemes = varembed.VarEmbed.load_varembed_format(vectors=vector_file, morfessor_model=morfessor_file)
```
This helps load trained varembed models into Gensim. Now you can use this for any of the Keyed Vector functionalities, like 'most_similar', 'similarity' and so on, already provided in gensim.
```
model.most_similar('government')
model.similarity('peace', 'grim')
```
# Conclusion
In this tutorial, we learnt how to load already trained varembed models vectors into gensim and easily use and evaluate it. That's it!
# Resources
* [Varembed Source Code](https://github.com/rguthrie3/MorphologicalPriorsForWordEmbeddings)
* [Gensim](http://radimrehurek.com/gensim/)
* [Lee Corpus](https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/test/test_data/lee.cor)
|
github_jupyter
|
# FMskill assignment
You are working on a project modelling waves in the Southern North Sea. You have done 6 different calibration runs and want to choose the "best". You would also like to see how your best model is performing compared to a third-party model in NetCDF.
The data:
* SW model results: 6 dfs0 files ts_runX.dfs0 each with 4 items corresponding to 4 stations
* observations: 4 dfs0 files with station data for (name, longitude, latitude):
- F16: 4.0122, 54.1167
- HKZA: 4.0090, 52.3066
- K14: 3.6333, 53.2667
- L9: 4.9667, 53.6167
* A map observations_map.png showing the model domain and observation positions
* Third party model: 1 NetCDF file
The tasks:
1. Calibration - find the best run
2. Validation - compare model to third-party model
```
fldr = "../data/FMskill_assignment/" # where have you put your data?
import fmskill
from fmskill import PointObservation, ModelResult, Connector
```
## 1. Calibration
* 1.1 Start simple: compare F16 with SW1 (the first calibration run)
* 1.2 Define all observations and all model results
* 1.3 Create connector, plot temporal coverage
* 1.4 Evaluate results
* 1.5 Which model is best?
### 1.1 Simple compare
Use fmskill.compare to do a quick comparison of F16 and SW1.
What is the mean absolute error in cm?
Do a time series plot.
### 1.2 Define all observations and all model results
* Define 4 PointObservations o1, o2, o3, o4
* Define 6 ModelResults mr1, mr2, ... (name them "SW1", "SW2", ...); see the sketch after this list
* How many items do the ModelResults have?
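A sketch of how these definitions might look, assuming the `PointObservation`/`ModelResult` classes imported above accept a file name plus `item`, `x`, `y` and `name` arguments; the observation file names (e.g. `F16.dfs0`) and item indices are assumptions, so adjust them to the actual files in your data folder (station coordinates are taken from the list above):
```
# Hypothetical observation file names; coordinates from the station list above
o1 = PointObservation(fldr + "F16.dfs0",  item=0, x=4.0122, y=54.1167, name="F16")
o2 = PointObservation(fldr + "HKZA.dfs0", item=0, x=4.0090, y=52.3066, name="HKZA")
o3 = PointObservation(fldr + "K14.dfs0",  item=0, x=3.6333, y=53.2667, name="K14")
o4 = PointObservation(fldr + "L9.dfs0",   item=0, x=4.9667, y=53.6167, name="L9")

# One ModelResult per calibration run; each ts_runX.dfs0 holds 4 items (one per station)
mr = [ModelResult(fldr + f"ts_run{i}.dfs0", name=f"SW{i}") for i in range(1, 7)]
```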
### 1.3 Create connector, plot temporal coverage
* Create empty Connector con
* The add the connections one observation at a time (start by matching o1 with the 6 models, then o2...)
* Print con to screen - which observation has most observation points?
* Plot the temporal coverage of observations and models
* Save the Connector to an excel configuration file
### 1.4 Evaluate results
Do relevant qualitative and quantitative analysis (e.g. time series plots, scatter plots, skill tables etc) to compare the models.
### 1.5 Find the best
Which calibration run is best?
* Which model performs best in terms of bias?
* Which model has the smallest scatter index?
* Which model has linear slope closest to 1.0 for the station HKZA?
* Consider the last day only (Nov 19) - which model has the smallest bias for that day?
* Weighted: Give observation F16 10-times more weight than the other observations - which has the smallest MAE?
* Extremes: Which model has lowest rmse for Hs>4.0 (df = cc.all_df[cc.all_df.obs_val>4])?
## 2. Validation
We will now compare our best model against the UK MetOffice's North West Shelf model stored in NWS_HM0.nc.
* 2.1 Create a ModelResult mr_NWS, evaluate mr_NWS.ds
* 2.2 Plot the first time step (hint .isel(time=0)) of ds (hint: the item is called "VHM0")
* 2.3 Create a Connector con_NWS with the 4 observations and mr_NWS
* 2.4 Evaluate NWS - what is the mean rmse?
* 2.5 Compare NWS to SW5 - which model is better? And is it so for all stations and all metrics? (hint: you can merge ComparisonCollections using the + operator)
|
github_jupyter
|
## Instalación de numpy
```
! pip install numpy
import numpy as np
```
### Array creation
```
my_int_list = [1, 2, 3, 4]
#create numpy array from original python list
my_numpy_arr = np.array(my_int_list)
print(my_numpy_arr)
# Array of zeros
print(np.zeros(10))
# Array of ones with type int
print(np.ones(10, dtype=int))
# Range of numbers
rangeArray = np.array(range(10), int)
print(rangeArray)
# Random array
print(f"Random array: {np.random.rand(5)}\n")
# Random matrix
print(f"Random matrix:\n {np.random.rand(5,4)}\n")
# Random array of integers in a range (say 0-9)
randomArray = np.floor(np.random.rand(10) * 10)
print(f"Random integer array: {randomArray}\n")
# Futher simplification
print(f"Random matrix:\n{np.random.randint(0, 10, (2,5))}\n")
integerArray = np.array([1,2,3,4], int)
integerArray2 = np.array([5,6], int)
# Concatenate two arrays
print(np.concatenate((integerArray, integerArray2)))
# Multidimensional array
floatArray = np.array([[1,2,3], [4,5,6]], float)
print(floatArray)
# Convert one dimensional to multidimensional arrays
rangeArray = rangeArray.reshape(5, 2)
print(rangeArray)
# Convert multidimensional to one dimensional array
rangeArray = rangeArray.flatten()
print(rangeArray)
# Concatenation of multi-dimensional arrays
arr1 = np.array([[1,2], [3,4]], int)
arr2 = np.array([[5,6], [7,8]], int)
print(f'array1: \n{arr1}\n')
print(f'array2: \n{arr2}')
# Based on dimension 1
print(np.concatenate((arr1, arr2), axis=0))
# Based on dimension 2
print(np.concatenate((arr1, arr2), axis=1))
```
### Universal Functions
Universal functions operate element-wise on the elements of an array, whether it is one-dimensional or multidimensional.
```
# we want to alter each element of the collection by multiplying each integer by 2
my_int_list = [1, 2, 3, 4]
# python code
for i, val in enumerate(my_int_list):
my_int_list[i] *= 2
my_int_list
#create numpy array from original python list
my_numpy_arr = np.array(my_int_list)
#multiply each element by 2
my_numpy_arr * 2
# Addition
print(f"Array 1 + Array 2\n {arr1 + arr2}\n")
# Multiplication
print(f"Array 1 * Array 2\n {arr1 * arr2}\n")
# Square root
print(f"Square root of Array 1\n {np.sqrt(arr1)}\n")
# Log
print(f"Log of Array 1\n {np.log(arr1)}\n")
```
https://towardsdatascience.com/numpy-python-made-efficient-f82a2d84b6f7
### Aggregation Functions
These functions are useful when we wish to summarise the information contained in an array.
```
arr1 = np.arange(1,10).reshape(3,3)
print(f'Array 1: \n{arr1}\n')
print(f"Sum of elements of Array 1: {arr1.sum()}\n")
print(f"Sum by row elements of Array 1: {np.sum(arr1, axis=1)}\n")
print(f"Sum by column elements of Array 1: {np.sum(arr1, axis=0)}\n")
print(f'Array 1: \n{arr1}\n')
# Mean of array elements
print(f"Mean of elements of Array 1: {arr1.mean()}\n")
# Minimum of array elements
print(f"Minimum of elements of Array 1: {arr1.min()}\n")
# Minimum of elements of Array 1: 1
# The index of the maximum of the array elements can be found by prefixing the function name with 'arg'
print(f"Index of maximum of elements of Array 1: {arr1.argmax()}")
```
### Broadcasting
Broadcasting is the set of rules that determines how universal functions operate on NumPy arrays of different shapes.
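The code below mixes broadcasting with a few other utilities. As a minimal sketch of broadcasting itself, a 1-D vector can be combined with a 2-D matrix and NumPy stretches it across the rows:
```
matrix = np.arange(9).reshape(3, 3)   # shape (3, 3)
row = np.array([10, 20, 30])          # shape (3,)
# The row is broadcast across every row of the matrix
print(matrix + row)
```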
```
sampleArray = np.array([[5,2,3], [3,4,5], [1,1,1]], int)
print(f"Sample Array\n {sampleArray}\n")
# Get unique values
print(f"Unique values: {np.unique(sampleArray)}\n")
# Unique values: [1 2 3 4 5]
# Get diagonal values
print(f"Diagonal\n {sampleArray.diagonal()}\n")
# Diagonal
# [5 4 1]
# Sort values in the multidimensional array
print(f"Sorted\n {np.sort(sampleArray)}\n")
sampleArray = np.array([[5,2,3], [3,4,5], [1,1,1]], int)
print(f"Sample Array\n {sampleArray}\n")
# Get diagonal values
print(f"Diagonal\n {sampleArray.T.diagonal()}\n")
vector = np.array([1,2,3,4], int)
matrix1 = np.array([[1,2,3], [4,5,6], [7,8,9]], int)
matrix2 = np.array([[1,1,1], [0,0,0], [1,1,1]], int)
# Dot operator
print(f"Dot of Matrix 1 and Matrix 2\n {np.dot(matrix1, matrix2)}\n")
# Cross operator
print(f"Cross of Matrix 1 and Matrix 2\n {np.cross(matrix1, matrix2)}\n")
# Outer operator
print(f"Outer of Matrix 1 and Matrix 2\n {np.outer(matrix1, matrix2)}\n")
# Inner operator
print(f"Inner of Matrix 1 and Matrix 2\n {np.inner(matrix1, matrix2)}")
```
### Slicing, masking and fancy indexing
The last strategy combines several indexing tricks: comparisons, slicing, boolean masking, and fancy (integer-list) indexing.
```
arr1 = np.array([[1,5], [7,8]], int)
arr2 = np.array([[6, 2], [7,8]], int)
print(f'Array 1: \n{arr1}\n')
print(f'Array 2: \n{arr2}\n\n')
# We can compare complete arrays of equal size element wise
print(f"Array 1 > Array 2\n{arr1 > arr2}\n")
# We can compare elements of an array with a given value
print(f"Array 1 == 2\n {arr1 == arr2}\n")
bigArray = np.array(range(10))
print("Array: {}".format(bigArray))
# Slice the array from index 0 to 4
print("Array values from index 0 to 4: {}".format(bigArray[0:5]))
# Masking using boolean values and operators
mask = (bigArray > 6) | (bigArray < 3)
print(mask)
print("Array values with mask as true: {}".format(bigArray[mask]))
# Fancy indexing
ind = [2,4,6]
print("Array values with index in list: {}".format(bigArray[ind]))
# Combine all three
print("Array values with index in list: {}".format(bigArray[bigArray > 6][:1]))
```
<img src="https://cdn-images-1.medium.com/max/800/1*cxbe7Omfj6Be0fbvD7gmGQ.png">
<img src="https://cdn-images-1.medium.com/max/800/1*9FImAfjF6Z6Hyv9lm1WgjA.png">
https://medium.com/@zachary.bedell/writing-beautiful-code-with-numpy-505f3b353174
```
# multiplying two matrices containing 60,000 and 80,000 integers
import time
import random as r
tick = time.time()
#create a 300x200 matrix of 60,000 random integers
my_list_1 = []
for row_index in range(300):
new_row = []
for col_index in range(200):
new_row.append(r.randint(0, 20))
my_list_1.append(new_row)
#create a 200x400 matrix of 80,000 random integers
my_list_2 = []
for row_index in range(200):
new_row = []
for col_index in range(400):
new_row.append(r.randint(0, 20))
my_list_2.append(new_row)
#create 2X3 array to hold results
my_result_arr = []
for row_index in range(300):
new_row = []
for col_index in range(400):
new_row.append(0)
my_result_arr.append(new_row)
# iterate through rows of my_list_1
for i in range(len(my_list_1)):
# iterate through columns of my_list_2
for j in range(len(my_list_2[0])):
# iterate through rows of my_list_2
for k in range(len(my_list_2)):
my_result_arr[i][j] += my_list_1[i][k] * my_list_2[k][j]
time_to_completion = time.time() - tick
print("execution time without NumPy: ", time_to_completion)
```
The code is difficult to read, and the solution requires double and triple nested loops, each of which has a high time complexity of O(n²) or O(n³).
```
import time
tick = time.time()
np_arr_1 = np.arange(0, 60000).reshape(300, 200)
np_arr_2 = np.arange(0, 80000).reshape(200, 400)
my_result_arr = np.matmul(np_arr_1, np_arr_2)
time_to_completion = time.time() - tick
print("execution time with NumPy: ", time_to_completion)
```
|
github_jupyter
|
# Lets-Plot in 2020
### Preparation
```
import numpy as np
import pandas as pd
import colorcet as cc
from PIL import Image
from lets_plot import *
from lets_plot.bistro.corr import *
LetsPlot.setup_html()
df = pd.read_csv("https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/lets_plot_git_history.csv", sep=';')
df = df[['author_date', 'author_name', 'files_changed', 'insertions', 'deletions']]
df.author_date = pd.to_datetime(df.author_date, utc=True)
df.files_changed = df.files_changed.str.split(' ').str[0].astype(int)
df.insertions = df.insertions.str.split(' ').str[0].astype(int)
df.deletions = df.deletions.fillna('0').str.split(' ').str[0].astype(int)
df['diff'] = df.insertions - df.deletions
df['month'] = df.author_date.dt.month
df['day'] = df.author_date.dt.day
df['weekday'] = df.author_date.dt.weekday
df['hour'] = df.author_date.dt.hour
df = df[df.author_date.dt.year == 2020].sort_values(by='author_date').reset_index(drop=True)
df.head()
```
### General Analytics
```
agg_features = {'files_changed': ['sum', 'mean'], \
'insertions': ['sum', 'mean'], \
'deletions': ['sum', 'mean'], \
'diff': ['sum']}
agg_df = df.groupby('author_name').agg(agg_features).reset_index()
agg_features['commits_number'] = ['sum']
agg_df = pd.merge(agg_df, df.author_name.value_counts().to_frame(('commits_number', 'sum')).reset_index(), \
left_on='author_name', right_on='index')
agg_df['color'] = cc.palette['glasbey_bw'][:agg_df.shape[0]]
plots = []
for feature, agg in [(key, val) for key, vals in agg_features.items() for val in vals]:
agg_df = agg_df.sort_values(by=(feature, agg), ascending=False)
aes_name = ('total {0}' if agg == 'sum' else 'mean {0} per commit').format(feature.replace('_', ' '))
plotted_df = agg_df[[('author_name', ''), (feature, agg), ('color', '')]]
plotted_df.columns = plotted_df.columns.get_level_values(0)
plots.append(ggplot(plotted_df) + \
geom_bar(aes(x='author_name', y=feature, color='color', fill='color'), \
stat='identity', alpha=.25, size=1, \
tooltips=layer_tooltips().line('^x')
.line('{0}|^y'.format(aes_name))) + \
scale_color_identity() + scale_fill_identity() + \
xlab('') + ylab('') + \
ggtitle(aes_name.title()))
w, h = 400, 300
bunch = GGBunch()
bunch.add_plot(plots[7], 0, 0, w, h)
bunch.add_plot(plots[6], w, 0, w, h)
bunch.add_plot(plots[0], 0, h, w, h)
bunch.add_plot(plots[1], w, h, w, h)
bunch.add_plot(plots[2], 0, 2 * h, w, h)
bunch.add_plot(plots[3], w, 2 * h, w, h)
bunch.add_plot(plots[4], 0, 3 * h, w, h)
bunch.add_plot(plots[5], w, 3 * h, w, h)
bunch.show()
```
Looking at the total values, we clearly see that Igor Alshannikov and Ivan Kupriyanov outcompete the rest, while third place remains a close contest.
Meanwhile, we see more diversity in mean values of different contribution types.
```
ggplot(df.hour.value_counts().to_frame('count').reset_index().sort_values(by='index')) + \
geom_histogram(aes(x='index', y='count', color='index', fill='index'), \
stat='identity', show_legend=False, \
tooltips=layer_tooltips().line('^y')) + \
scale_x_discrete(breaks=list(range(24))) + \
scale_color_gradient(low='#e0ecf4', high='#8856a7') + \
scale_fill_gradient(low='#e0ecf4', high='#8856a7') + \
xlab('hour') + ylab('commits number') + \
ggtitle('Total Hourly Committing') + ggsize(600, 450)
```
The peak of commit activity is at around 18:00. The evening seems to be a good time to save daily results.
### Higher Resolution
```
plotted_df = df[df.insertions > 0].reset_index(drop=True)
plotted_df['insertions_unit'] = np.ones(plotted_df.shape[0])
ggplot(plotted_df) + \
geom_segment(aes(x='author_date', y='insertions_unit', xend='author_date', yend='insertions'), color='#8856a7') + \
geom_point(aes(x='author_date', y='insertions', fill='month'), shape=21, color='#8856a7', \
tooltips=layer_tooltips().line('@author_name').line('@|@insertions').line('@|@month')) + \
scale_x_datetime(name='date') + \
scale_y_log10(name='insertions (log)') + \
scale_fill_brewer(name='', type='qual', palette='Paired') + \
facet_grid(y='author_name') + \
ggtitle('Lollipop Plot of Commits by Authors') + ggsize(800, 1000)
```
Some of the team members started their work only a few months ago, so they still have time to catch up next year.
```
ggplot(df) + \
geom_point(aes(x='weekday', y='insertions', color='author_name', size='files_changed'), \
shape=8, alpha=.5, position='jitter', show_legend=False, \
tooltips=layer_tooltips().line('author|@author_name')
.line('@|@insertions')
.line('@|@deletions')
.line('files changed|@files_changed')) + \
scale_x_discrete(labels=['Monday', 'Tuesday', 'Wednesday', 'Thursday', \
'Friday', 'Saturday', 'Sunday']) + \
scale_y_log10(breaks=[2 ** n for n in range(16)]) + \
scale_size(range=[3, 7], trans='sqrt') + \
ggtitle('All Commits') + ggsize(800, 600) + \
theme(axis_tooltip='blank')
```
Usually no one works at the weekend, but when something needs to be done, it gets done.
### And Finally...
```
r = df.groupby('day').insertions.median().values
x = r * np.cos(np.linspace(0, 2 * np.pi, r.size))
y = r * np.sin(np.linspace(0, 2 * np.pi, r.size))
daily_insertions_df = pd.DataFrame({'x': x, 'y': y})
MONTHS = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
mask_width, mask_height = 60, 80
mask = np.array(Image.open("images/snowman_mask.bmp").resize((mask_width, mask_height), Image.BILINEAR))
grid = [[(0 if color.mean() > 255 / 2 else 1) for color in row] for row in mask]
grid_df = pd.DataFrame(grid).stack().to_frame('month')
grid_df.index.set_names(['y', 'x'], inplace=True)
grid_df = grid_df.reset_index()
grid_df.y = grid_df.y.max() - grid_df.y
grid_df = grid_df[grid_df.month > 0].reset_index(drop=True)
agg_df = np.round(df.month.value_counts() * grid_df.shape[0] / df.shape[0]).to_frame('commits_number')
agg_df.iloc[0].commits_number += grid_df.shape[0] - agg_df.commits_number.sum()
agg_df.commits_number = agg_df.commits_number.astype(int)
agg_df.index.name = 'month'
agg_df = agg_df.reset_index()
grid_df['commits_number'] = 0
start_idx = 0
for idx, (month, commits_number) in agg_df.iterrows():
grid_df.loc[start_idx:(start_idx + commits_number), 'month'] = MONTHS[month - 1]
grid_df.loc[start_idx:(start_idx + commits_number), 'commits_number'] = commits_number
start_idx += commits_number
blank_theme = theme_classic() + theme(axis='blank', axis_ticks_x='blank', axis_ticks_y='blank', legend_position='none')
ps = ggplot(daily_insertions_df, aes(x='x', y='y')) + \
geom_polygon(color='#f03b20', fill='#fd8d3c', size=1) + coord_fixed() + blank_theme
p1l = corr_plot(data=df[['insertions', 'deletions']], flip=False).tiles(type='lower', diag=True)\
.palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p1r = corr_plot(data=df[['deletions', 'insertions']], flip=True).tiles(type='lower', diag=True)\
.palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p2l = corr_plot(data=df[['insertions', 'deletions', 'diff']], flip=False).tiles(type='lower', diag=True)\
.palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p2r = corr_plot(data=df[['diff', 'deletions', 'insertions']], flip=True).tiles(type='lower', diag=True)\
.palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p3l = corr_plot(data=df[['insertions', 'deletions', 'diff', 'files_changed']], flip=False)\
.tiles(type='lower', diag=True).palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
p3r = corr_plot(data=df[['files_changed', 'diff', 'deletions', 'insertions']], flip=True)\
.tiles(type='lower', diag=True).palette_gradient(low='blue', mid='green', high='darkgreen').build() + blank_theme
pt = ggplot({'x': [0], 'y': [0], 'greetings': ['Happy New Year!']}, aes(x='x', y='y')) + \
geom_text(aes(label='greetings'), color='blue', size=20, family='Times New Roman', fontface='bold') + blank_theme
pm = ggplot(grid_df, aes(x='x', y='y')) + \
geom_tile(aes(fill='month'), width=.8, height=.8, \
tooltips=layer_tooltips().line('@|@month')
.line('@|@commits_number')) + \
scale_fill_brewer(type='qual', palette='Set2') + \
blank_theme
w, h = 50, 50
bunch = GGBunch()
bunch.add_plot(ps, 3 * w, 0, 2 * w, 2 * h)
bunch.add_plot(p1l, 2 * w, 2 * h, 2 * w, 2 * h)
bunch.add_plot(p1r, 4 * w, 2 * h, 2 * w, 2 * h)
bunch.add_plot(p2l, w, 4 * h, 3 * w, 3 * h)
bunch.add_plot(p2r, 4 * w, 4 * h, 3 * w, 3 * h)
bunch.add_plot(p3l, 0, 7 * h, 4 * w, 4 * h)
bunch.add_plot(p3r, 4 * w, 7 * h, 4 * w, 4 * h)
bunch.add_plot(pt, 0, 11 * h, 16 * w, 2 * h)
bunch.add_plot(pm, 8 * w, 3 * h, 8 * w, 8 * h)
bunch.show()
```
|
github_jupyter
|
# LSV Data Analysis and Parameter Estimation
##### First, all relevent Python packages are imported
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import curve_fit
from scipy.signal import savgol_filter, find_peaks, find_peaks_cwt
import pandas as pd
import math
import glob
import altair as alt
from voltammetry import preprocessing, plotting, fitting
```
##### The user will be able to import experimental data for an LSV scan
##### (Currently, we assume that the LSV sweep starts at equilibrium)
```
##Import Experimental Reversible Data:
rev_exp_data = pd.read_csv("data/10mVs_Reversible.csv")
current_exp=rev_exp_data['current(A)'].values
voltage_exp=rev_exp_data['voltage(mV)'].values
time_exp=rev_exp_data['time(s)'].values
## all appropriate packages and the singular experimental data file is imported now
```
##### Next, the program extracts some simple quantitative information from the scan that would be tedious to obtain by hand or over extensive datasets
```
t,i,v = preprocessing.readFile('data/10mM_F2CA_1M_KOH_pH_14_100mV.DTA',type='gamry',scan='first')
length = len(t)
v1, v2 = v[0:int(length/2)], v[int(length/2):]
i1, i2 = i[0:int(length/2)], i[int(length/2):]
t1, t2 = t[0:int(length/2)], t[int(length/2):]
peak_list = []
_, v_peaks, i_peaks = fitting.peak_find(v1,i1,v2,i2)
b1, b2 = fitting.baseline(v1,i1,v2,i2)
for n in range(len(v_peaks)):
peak_list.append([i_peaks[n],v_peaks[n]])
plotting.plot_voltammogram(t,i,v, peaks = peak_list).display()
plt.plot(v1,b1)
plt.plot(v1,i1)
plt.plot(v2,b2)
plt.plot(v2,i2)
```
##### This program can also return relevant parameters using a physics-based model.
```
# Import the dimensionless voltammogram (V, I) for reversible reactions
rev_dim_values = pd.read_csv("data/dimensionless_values_rev.csv")
rev_dim_current=rev_dim_values['dimensionless_current'].values
rev_dim_voltage=rev_dim_values['dimensionless_Voltage'].values
##We will now prompt the user to submit known parameters (THESE CAN BE CHANGED OR MADE MORE CONVENIENT)
sweep_rate= float(input("What is the Voltage sweep rate in mV/s?(10)"))
electrode_surface_area= float(input("What is the electrode surface area in cm^2?(.2)"))
concentration_initial= float(input("What is the initial concentration in mol/cm^3?(.00001)"))
Temp= float(input("What is the temperature in K?(298)"))
eq_pot= float(input("What is the equilibrium potential in V?(.10)"))
##we are inserting a diffusion coefficient to check math here, we will estimate this later:
Diff_coeff=0.00001
## Here we define constant variables, these can be made to user inputs if needed.
n=1
Faradays_const=96285
R_const=8.314
sigma=(n*Faradays_const*sweep_rate)/(R_const*Temp)
Pre=electrode_surface_area*concentration_initial*n*Faradays_const*math.sqrt(Diff_coeff*sigma)
output_voltage=(eq_pot+rev_dim_voltage/n)
output_current=Pre*rev_dim_current
plt.plot(output_voltage,output_current)
```
##### Then, we can back out a relevant parameter from the data:
```
# Fitting Diff_Coeff
def test_func(rev_dim_current, D):
return electrode_surface_area*concentration_initial*n*Faradays_const*math.sqrt(D*sigma)*rev_dim_current
params, params_covariance = curve_fit(test_func, rev_dim_current, output_current,p0=None,bounds = (0,[1]))
print("Diffusion Coefficient (cm^2/s): {}".format(params[0]))
```
##### We can repeat this exercise on an LSV with an irreversible reaction to determine exchange current density.
```
##Import Experimental Irreversible Data:
irrev_exp_data = pd.read_csv("data/10mVs_Irreversible.csv")
current_exp=irrev_exp_data['current(A)'].values
voltage_exp=irrev_exp_data['voltage(mV)'].values
time_exp=irrev_exp_data['time(s)'].values
## all appropriate packages and the singular experimental data file is imported now
# Import the dimensionless voltammogram (V, I) for irreversible reactions
irrev_dim_values = pd.read_csv("data/dimensionless_values_irrev.csv")
irrev_dim_current=irrev_dim_values['dimensionless_current'].values
irrev_dim_voltage=irrev_dim_values['dimensionless_Voltage'].values
##We will now prompt the user to submit known parameters (THESE CAN BE CHANGED OR MADE MORE CONVENIENT)
sweep_rate= float(input("What is the Voltage sweep rate in mV/s?(10)"))
electrode_surface_area= float(input("What is the electrode surface area in cm^2?(.2)"))
concentration_initial= float(input("What is the initial concentration in mol/cm^3?(.00001)"))
Temp= float(input("What is the temperature in K?(298)"))
eq_pot= float(input("What is the equilibrium potential in mV?(100)"))
##we are inserting a diffusion coefficient to check math here, we will estimate this later:
Diff_coeff=0.00001
## Here we define constant variables, these can be made to user inputs if needed.
n=1
Faradays_const=96285
R_const=8.314
exchange_current_density=0.0002
kinetic_coefficient=exchange_current_density/n/Faradays_const/electrode_surface_area/concentration_initial
transfer_coefficient=.6
eV_const=59.1
beta=transfer_coefficient*n*Faradays_const*sweep_rate/R_const/Temp/1000
Pre=(concentration_initial*n*Faradays_const*
math.sqrt(Diff_coeff*sweep_rate*transfer_coefficient
*Faradays_const/(R_const*Temp*1000)))
output_voltage=eq_pot+irrev_dim_voltage/transfer_coefficient-eV_const/transfer_coefficient*math.log(math.sqrt(math.pi*Diff_coeff*beta)/kinetic_coefficient)
output_current=Pre*irrev_dim_current
plt.plot(output_voltage,output_current)
# Fitting Diff_Coeff
from scipy import optimize
def test_func(irrev_dim_voltage, exchange_current_density):
return eq_pot+irrev_dim_voltage/transfer_coefficient-eV_const/transfer_coefficient*math.log(math.sqrt(math.pi*Diff_coeff*beta)/(exchange_current_density/n/Faradays_const/electrode_surface_area/concentration_initial))
params, params_covariance = optimize.curve_fit(test_func, irrev_dim_voltage, output_voltage,p0=None,bounds = (0,[1]))
print("Exchange current density (A/cm^2): {}".format(params[0]))
```
|
github_jupyter
|
# Test: Minimum error discrimination
In this notebook we are testing the evolution of the error probability with the number of evaluations.
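For reference, the benchmark computed below via `nnd.helstrom_bound` is the Helstrom bound; assuming two pure states with equal prior probabilities, the minimum achievable error probability is
$P_{err}^{opt} = \frac{1}{2}\left(1 - \sqrt{1 - |\langle\psi|\phi\rangle|^2}\right)$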
```
import sys
sys.path.append('../../')
import itertools
import numpy as np
import matplotlib.pyplot as plt
from numpy import pi
from qiskit.algorithms.optimizers import SPSA
from qnn.quantum_neural_networks import StateDiscriminativeQuantumNeuralNetworks as nnd
from qnn.quantum_state import QuantumState
plt.style.use('ggplot')
def callback(params, results, prob_error, prob_inc, prob):
data.append(prob_error)
# Create random states
ψ = QuantumState.random(1)
ϕ = QuantumState.random(1)
# Parameters
th_u, fi_u, lam_u = [0], [0], [0]
th1, th2 = [0], [pi]
th_v1, th_v2 = [0], [0]
fi_v1, fi_v2 = [0], [0]
lam_v1, lam_v2 = [0], [0]
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator = nnd([ψ, ϕ])
data = []
results = discriminator.discriminate(SPSA(100), params, callback=callback)
optimal = nnd.helstrom_bound(ψ, ϕ)
print(f'Optimal results: {optimal}\nActual results: {results}')
fig = plt.figure(figsize=(14, 6))
plt.plot(data, '-')
plt.xlabel('Number of evaluations')
plt.ylabel('Probability')
plt.legend(['Experimental'])
plt.title('Evolution of error probability for 2 states')
fig.savefig('twostates.png')
plt.show()
th_u, fi_u, lam_u = results[0][:3]
th1 = results[0][3]
th2 = results[0][4]
th_v1 = results[0][5]
th_v2 = results[0][6]
fi_v1 = results[0][7]
fi_v2 = results[0][8]
lam_v1 = results[0][9]
lam_v2 = results[0][10]
M = nnd.povm( 2,
[th_u], [fi_u], [lam_u],
[th1], [th2],
[th_v1], [th_v2],
[fi_v1], [fi_v2],
[lam_v1], [lam_v2], output='povm' )
plt.style.use('default')
sphere = nnd.plot_bloch_sphere( M , [ψ, ϕ] )
sphere.render()
plt.savefig('sphere_2_states')
plt.style.use('ggplot')
# Create random states
ψ = QuantumState.random(1)
ϕ = QuantumState.random(1)
χ = QuantumState.random(1)
# Parameters
th_u, fi_u, lam_u = [0], [0], [0]
th1, th2 = 2 * [0], 2 * [pi]
th_v1, th_v2 = 2 * [0], 2 * [0]
fi_v1, fi_v2 = 2 * [0], 2 * [0]
lam_v1, lam_v2 = 2 * [0], 2 * [0]
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator = nnd([ψ, ϕ, χ])
data = []
results = discriminator.discriminate(SPSA(100), params, callback=callback)
print(f'Results: {results}')
fig = plt.figure(figsize=(14, 6))
plt.plot(data, '-')
plt.xlabel('Number of evaluations')
plt.ylabel('Probability')
plt.legend(['Experimental'])
plt.title('Evolution of error probability for 3 states')
fig.savefig('3states.png')
plt.show()
th_u, fi_u, lam_u = results[0][:3]
th1 = results[0][3:5]
th2 = results[0][5:7]
th_v1 = results[0][7:9]
th_v2 = results[0][9:11]
fi_v1 = results[0][11:13]
fi_v2 = results[0][13:15]
lam_v1 = results[0][15:17]
lam_v2 = results[0][17:19]
M = nnd.povm( 3,
[th_u], [fi_u], [lam_u],
th1, th2,
th_v1, th_v2,
fi_v1, fi_v2,
lam_v1, lam_v2, output='povm' )
plt.style.use('default')
sphere = nnd.plot_bloch_sphere( M , [ψ, ϕ, χ] )
sphere.render()
plt.savefig('sphere_3_states.png')
plt.style.use('ggplot')
# Create random states
ψ = QuantumState([ np.array([1,0]) ])
ϕ = QuantumState([ np.array([np.cos(np.pi/4), np.sin(np.pi/4)]),
np.array([np.cos(0.1+np.pi/4),np.sin(0.1+np.pi/4)] ) ])
χ = QuantumState([ np.array([np.cos(np.pi/4), 1j*np.sin(np.pi/4)]),
np.array([np.cos(0.1+np.pi/4), 1j*np.sin(0.1+np.pi/4)] ),
np.array([np.cos(-0.1+np.pi/4), 1j*np.sin(-0.1+np.pi/4)] )])
# Parameters
th_u, fi_u, lam_u = list(np.pi*np.random.randn(1)), list(np.pi*np.random.randn(1)), list(np.pi*np.random.randn(1))
th1, th2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
th_v1, th_v2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
fi_v1, fi_v2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
lam_v1, lam_v2 = list(np.pi*np.random.randn(2)), list(np.pi*np.random.randn(2))
params = list(itertools.chain(th_u, fi_u, lam_u, th1, th2, th_v1, th_v2, fi_v1, fi_v2, lam_v1, lam_v2))
# Initialize Discriminator
discriminator = nnd([ψ, ϕ, χ])
data = []
results = discriminator.discriminate(SPSA(100), params, callback=callback)
print(f'Results: {results}')
fig = plt.figure(figsize=(14, 6))
plt.plot(data, '-')
plt.xlabel('Number of evaluations')
plt.ylabel('Probability')
plt.legend(['Experimental'])
plt.title('Evolution of error probability for 3 states with noise')
fig.savefig('noisy.png')
plt.show()
th_u, fi_u, lam_u = results[0][:3]
th1 = results[0][3:5]
th2 = results[0][5:7]
th_v1 = results[0][7:9]
th_v2 = results[0][9:11]
fi_v1 = results[0][11:13]
fi_v2 = results[0][13:15]
lam_v1 = results[0][15:17]
lam_v2 = results[0][17:19]
M = nnd.povm( 3,
[th_u], [fi_u], [lam_u],
th1, th2,
th_v1, th_v2,
fi_v1, fi_v2,
lam_v1, lam_v2, output='povm' )
plt.style.use('default')
sphere = nnd.plot_bloch_sphere( M , [ψ, ϕ, χ] )
sphere.render()
plt.savefig('sphere_3_states_noisy.png')
plt.style.use('ggplot')
```
|
github_jupyter
|
# Terminologies
<img src="https://github.com/dorisjlee/remote/blob/master/astroSim-tutorial-img/terminology.jpg?raw=true",width=20%>
- __Domain__ (aka Grids): the whole simulation box.
- __Block__ (aka Zones): a group of cells that makes up a larger unit so that it can be handled more easily. If the code is run in parallel, one processor can be assigned to work on several blocks (specified by iProcs, jProcs, kProcs in flash.par). In FLASH, the default block size is $2^3 = 8$ cells, which means that level 0 of the AMR is 8 cells, and so forth.
<img src="https://github.com/dorisjlee/remote/blob/master/astroSim-tutorial-img/level_cells.jpg?raw=true",width=20%>
- __Cells__ : basic units that contain information about the fluid variables (often called primitives: $\rho$, $P$, $v_{x,y,z}$,$B_{x,y,z}$)
- __Ghost cells__ (abbreviated as ``gc`` in FLASH): can be thought of as an extra layer of padding outside the simulation domain. The values of these ghost cells are mostly determined by the boundary conditions you choose. Generally, you won't have to touch these when specifying the initial conditions.
# Simulation_initBlock.F90
Simulation_initBlock is called by each block. First we compute the center based on the dimensions of the box (in cgs) from flash.par:
~~~fortran
center = abs(xmin-xmax)/2.
~~~
We loop through all the coordinates of the cell within each block.
~~~fortran
do k = blkLimits(LOW,KAXIS),blkLimits(HIGH,KAXIS)
! get the coordinates of the cell center in the z-direction
zz = zCoord(k)-center
do j = blkLimits(LOW,JAXIS),blkLimits(HIGH,JAXIS)
! get the coordinates of the cell center in the y-direction
yy = yCoord(j)-center
do i = blkLimits(LOW,IAXIS),blkLimits(HIGH,IAXIS)
! get the cell center, left, and right positions in x
xx = xCenter(i)-center
~~~
``xCenter,yCoord,zCoord`` are functions that return the cell position (in cgs) given its cell index. These calculations treat the bottom left corner of the box as the origin, so we subtract the box center to move the origin to the center, as shown in Fig 3.
<img src="https://github.com/dorisjlee/remote/blob/master/astroSim-tutorial-img/user_coord.png?raw=true" width="200" height="200">
__Fig 3: The corrected ``xx,yy,zz`` are physical positions measured from the origin.__
Given the cell positions, you can specify values for initializing the fluid variables.
The fluid variables are stored in local variables (called rhoZone, presZone, velxZone, velyZone, velzZone in the example), which are then transferred into the cell one at a time using the ``Grid_putPointData`` method:
~~~fortran
call Grid_putPointData(blockId, CENTER, DENS_VAR, EXTERIOR, axis, rhoZone)
~~~
For example, you may have an analytical radial density distribution ($\rho= Ar^2$) that you would like to initialize the sphere with:
~~~fortran
rr = sqrt(xx**2 + yy**2 + zz**2)
rhoZone = A*rr**2
~~~
Or maybe your initial conditions cannot be expressed in closed form; in that case you can read in precomputed values for each cell. This optional tutorial will explain how to do linear interpolation to set up the numerical solution of the Lane-Emden sphere.
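As a language-agnostic sketch of that idea (shown here in Python/NumPy for brevity rather than FLASH's Fortran; the profile values are made up), linearly interpolating a precomputed radial profile onto a cell radius could look like:
```
import numpy as np

# Hypothetical precomputed radial profile, e.g. a Lane-Emden solution sampled on r_table
r_table = np.linspace(0.0, 3.0e18, 100)                # radius samples [cm] (made-up values)
rho_table = 1.0e-20 * (1.0 + r_table / 1.0e18) ** -2   # density samples [g/cm^3] (made-up profile)

rr = 2.5e18                                            # cell radius computed from xx, yy, zz
rhoZone = np.interp(rr, r_table, rho_table)            # linear interpolation onto this cell
print(rhoZone)
```
The same logic carries over to Fortran: locate the two table radii bracketing ``rr`` and take the weighted average of the corresponding table densities.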
### Adding new RuntimeParameters to be read into Simulation_initBlock.F90
As we have already seen, to compute the center of the box we need to read in the dimensions of the box (``xmin,xmax``) from flash.par. Some runtime parameters are used by other simulation modules, and some are specific to the problem and defined by the user.
To add in a new runtime parameter:
1) In ``Simulation_data.F90``, declare the variables to store these runtime parameters:
~~~fortran
real, save :: fattening_factor,beta_param,xmin,xmax
~~~
2) In ``Simulation_init.F90``, read in the values of the runtime parameter:
~~~fortran
call RuntimeParameters_get('xmin',xmin)
call RuntimeParameters_get('xmax',xmax)
~~~
3) In ``Simulation_initBlock.F90``, use the data:
~~~fortran
use Simulation_data, ONLY: xmin,xmax
~~~
Note that you should __NOT__ declare ``real::xmin,xmax`` again inside ``Simulation_initBlock.F90``; otherwise, the values that you read in will be overridden.
|
github_jupyter
|
# Single layer Neural Network
In this notebook, we will code a single neuron and use it as a linear classifier with two inputs. The tuning of the neuron parameters is done by backpropagation using gradient descent.
```
from sklearn.datasets import make_blobs
import numpy as np
# matplotlib to display the data
import matplotlib
matplotlib.rc('font', size=16)
matplotlib.rc('xtick', labelsize=16)
matplotlib.rc('ytick', labelsize=16)
from matplotlib import pyplot as plt, cm
from matplotlib.colors import ListedColormap
%matplotlib inline
```
## Dataset
Let's create some labeled data in the form of (X, y) with an associated class which can be 0 or 1. For this we can use the function `make_blobs` in the `sklearn.datasets` module. Here we use 2 centers with coordinates (-0.5, -1.0) and (1.0, 1.0).
```
X, y = make_blobs(n_features=2, random_state=42, centers=[(-0.5, -1.0), (1.0, 1.0)])
y = y.reshape((y.shape[0], 1))
print(X.shape)
print(y.shape)
```
Plot our training data using `plt.scatter` to have a first visualization. Here we color the points with their labels stored in `y`.
```
plt.scatter(X[:, 0], X[:, 1], c=y.squeeze(), edgecolors='gray')
plt.title('training data with labels')
plt.axis('equal')
plt.show()
```
## Activation functions
Here we play with popular activation functions like tanh, ReLU, and sigmoid.
```
def heaviside(x):
return np.heaviside(x, np.zeros_like(x))
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def ReLU(x):
return np.maximum(0, x)
def leaky_ReLU(x, alpha=0.1):
return np.maximum(alpha * x, x)
def tanh(x):
return np.tanh(x)
from math import pi
plt.figure()
x = np.arange(-pi, pi, 0.01)
plt.axhline(y=0., color='gray', linestyle='dashed')
plt.axhline(y=-1, color='gray', linestyle='dashed')
plt.axhline(y=1., color='gray', linestyle='dashed')
plt.axvline(x=0., color='gray', linestyle='dashed')
plt.xlim(-pi, pi)
plt.ylim(-1.2, 1.2)
plt.title('activation functions', fontsize=16)
plt.plot(x, heaviside(x), label='heaviside', linewidth=3)
legend = plt.legend(loc='lower right')
plt.savefig('activation_functions_1.pdf')
plt.plot(x, sigmoid(x), label='sigmoid', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_2.pdf')
plt.plot(x, tanh(x), label='tanh', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_3.pdf')
plt.plot(x, ReLU(x), label='ReLU', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_4.pdf')
plt.plot(x, leaky_ReLU(x), label='leaky ReLU', linewidth=3)
plt.legend(loc='lower right')
plt.savefig('activation_functions_5.pdf')
plt.show()
# gradients of the activation functions
def sigmoid_grad(x):
s = sigmoid(x)
return s * (1 - s)
def relu_grad(x):
return 1. * (x > 0)
def tanh_grad(x):
return 1 - np.tanh(x) ** 2
plt.figure()
x = np.arange(-pi, pi, 0.01)
plt.plot(x, sigmoid_grad(x), label='sigmoid gradient', linewidth=3)
plt.plot(x, relu_grad(x), label='ReLU gradient', linewidth=3)
plt.plot(x, tanh_grad(x), label='tanh gradient', linewidth=3)
plt.xlim(-pi, pi)
plt.title('activation function derivatives', fontsize=16)
legend = plt.legend()
legend.get_frame().set_linewidth(2)
plt.savefig('activation_functions_derivatives.pdf')
plt.show()
```
## ANN implementation
A simple neuron with two inputs $(x_1, x_2)$ applies an affine transform with weights $(w_1, w_2)$ and bias $w_0$.
The neuron computes a quantity called the activation: $a=\sum_i w_i x_i + w_0 = w_0 + w_1 x_1 + w_2 x_2$
This quantity is sent to the activation function, chosen here to be a sigmoid: $f(a)=\dfrac{1}{1+e^{-a}}$
$f(a)$ is the output of the neuron bounded between 0 and 1.
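As a quick worked example (a minimal sketch with made-up weights, relying only on NumPy imported above), the neuron's output for one input point is simply the sigmoid of the affine activation:
```
# Made-up parameters: w0 (bias), w1, w2, and one input point (x1, x2) = (0.3, -0.7)
w = np.array([0.5, -1.0, 2.0])    # [w0, w1, w2]
x = np.array([1.0, 0.3, -0.7])    # bias trick: prepend a 1 so that a = w . x
a = np.dot(w, x)                  # activation: w0 + w1*x1 + w2*x2
output = 1 / (1 + np.exp(-a))     # sigmoid, bounded between 0 and 1
print(a, output)
```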
### Quick implementation
First let's implement our network in a concise fashion.
```
import numpy as np
from numpy.random import randn
X, y = make_blobs(n_samples= 100, n_features=2, random_state=42, centers=[[-0.5, -1], [1, 1]])
# adjust the sizes of our arrays
X = np.c_[np.ones(X.shape[0]), X]
print(X.shape)
y = y.reshape((y.shape[0], 1))
np.random.seed(2)
W = randn(3, 1)
print('* model params: {}'.format(W.tolist()))
eta = 1e-2 # learning rate
n_epochs = 50
for t in range(n_epochs):
# forward pass
y_pred = sigmoid(X.dot(W))
loss = np.sum((y_pred - y) ** 2)
print(t, loss)
# backprop
grad_y_pred = 2 * (y_pred - y)
grad_W = np.dot(X.T, grad_y_pred * y_pred * (1 - y_pred))
# update rule
W -= eta * grad_W
print('* new model params: {}'.format(W.tolist()))
```
### Modular implementation
Now let's create a class to represent our neural network to have more flexibility and modularity. This will prove to be useful later when we add more layers.
```
class SingleLayerNeuralNetwork:
"""A simple artificial neuron with a single layer and two inputs.
This type of network is called a Single Layer Neural Network and belongs to
the Feed-Forward Neural Networks. Here, the activation function is a sigmoid,
the loss is computed using the squared error between the target and
the prediction. Learning the parameters is achieved using back-propagation
and gradient descent
"""
def __init__(self, eta=0.01, rand_seed=42):
"""Initialisation routine."""
np.random.seed(rand_seed)
self.W = np.random.randn(3, 1) # weights
self.eta = eta # learning rate
self.loss_history = []
def sigmoid(self, x):
"""Our activation function."""
return 1 / (1 + np.exp(-x))
def sigmoid_grad(self, x):
"""Gradient of the sigmoid function."""
return self.sigmoid(x) * (1 - self.sigmoid(x))
def predict(self, X, bias_trick=True):
X = np.atleast_2d(X)
if bias_trick:
# bias trick: add a column of 1 to X
X = np.c_[np.ones((X.shape[0])), X]
return self.sigmoid(np.dot(X, self.W))
def loss(self, X, y, bias_trick=False):
"""Compute the squared error loss for a given set of inputs."""
y_pred = self.predict(X, bias_trick=bias_trick)
y_pred = y_pred.reshape((y_pred.shape[0], 1))
loss = np.sum((y_pred - y) ** 2)
return loss
def back_propagation(self, X, y):
"""Conduct backpropagation to update the weights."""
X = np.atleast_2d(X)
y_pred = self.sigmoid(np.dot(X, self.W)).reshape((X.shape[0], 1))
grad_y_pred = 2 * (y_pred - y)
grad_W = np.dot(X.T, grad_y_pred * y_pred * (1 - y_pred))
# update weights
self.W -= self.eta * grad_W  # use the instance learning rate, not the global eta
def fit(self, X, y, n_epochs=10, method='batch', save_fig=False):
"""Perform gradient descent on a given number of epochs to update the weights."""
# bias trick: add a column of 1 to X
X = np.c_[np.ones((X.shape[0])), X]
self.loss_history.append(self.loss(X, y)) # initial loss
for i_epoch in range(n_epochs):
if method == 'batch':
# perform backprop on the whole training set (batch)
self.back_propagation(X, y)
# weights were updated, compute the loss
loss = self.loss(X, y)
self.loss_history.append(loss)
print(i_epoch, self.loss_history[-1])
else:
# here we update the weight for every data point (SGD)
for (xi, yi) in zip(X, y):
self.back_propagation(xi, yi)
# weights were updated, compute the loss
loss = self.loss(X, y)
self.loss_history.append(loss)
if save_fig:
self.plot_model(i_epoch, save=True, display=False)
def decision_boundary(self, x):
"""Return the decision boundary in 2D."""
return -self.W[0] / self.W[2] - self.W[1] / self.W[2] * x
def plot_model(self, i_epoch=-1, save=False, display=True):
"""Build a figure to vizualise how the model perform."""
xx0, xx1 = np.arange(-3, 3.1, 0.1), np.arange(-3, 4.1, 0.1)
XX0, XX1 = np.meshgrid(xx0, xx1)
# apply the model to the grid
y_an = np.empty(len(XX0.ravel()))
i = 0
for (x0, x1) in zip(XX0.ravel(), XX1.ravel()):
y_an[i] = self.predict(np.array([x0, x1]))
i += 1
y_an = y_an.reshape((len(xx1), len(xx0)))
figure = plt.figure(figsize=(12, 4))
ax1 = plt.subplot(1, 3, 1)
#ax1.set_title(r'$w_0=%.3f$, $w_1=%.3f$, $w_2=%.3f$' % (self.W[0], self.W[1], self.W[2]))
ax1.set_title("current prediction")
ax1.contourf(XX0, XX1, y_an, alpha=.5)
ax1.scatter(X[:, 0], X[:, 1], c=y.squeeze(), edgecolors='gray')
ax1.set_xlim(-3, 3)
ax1.set_ylim(-3, 4)
print(ax1.get_xlim())
x = np.array(ax1.get_xlim())
ax1.plot(x, self.decision_boundary(x), 'k-', linewidth=2)
ax2 = plt.subplot(1, 3, 2)
x = np.arange(3) # the label locations
rects1 = ax2.bar(x, [self.W[0, 0], self.W[1, 0], self.W[2, 0]])
ax2.set_title('model parameters')
ax2.set_xticks(x)
ax2.set_xticklabels([r'$w_0$', r'$w_1$', r'$w_2$'])
ax2.set_ylim(-1, 2)
ax2.set_yticks([0, 2])
ax2.axhline(xmin=0, xmax=2)
ax3 = plt.subplot(1, 3, 3)
ax3.plot(self.loss_history, c='lightgray', lw=2)
if i_epoch < 0:
i_epoch = len(self.loss_history) - 1
ax3.plot(i_epoch, self.loss_history[i_epoch], 'o')
ax3.set_title('loss evolution')
ax3.set_yticks([])
plt.subplots_adjust(left=0.05, right=0.98)
if save:
plt.savefig('an_%02d.png' % i_epoch)
if display:
plt.show()
plt.close()
```
### Train our model on the data set
Create two blobs with $n=10000$ data points.
Instantiate the model with $\eta=0.1$ and a random seed of 2.
Train the model using batch gradient descent for 100 epochs.
```
X, y = make_blobs(n_samples=10000, n_features=2, random_state=42, centers=[[-0.5, -1], [1, 1]])
y = y.reshape((y.shape[0], 1))
an1 = SingleLayerNeuralNetwork(eta=0.1, rand_seed=2)
print('* init model params: {}'.format(an1.W.tolist()))
print(an1.loss(X, y, bias_trick=True))
an1.fit(X, y, n_epochs=100, method='batch', save_fig=False)
print('* new model params: {}'.format(an1.W.tolist()))
```
Now that we have trained our model, let's plot the results:
```
an1.plot_model()
```
Now try to train another network using SGD. Use only 1 epoch since with SGD, we are updating the weights with every training point (so $n$ times per epoch).
```
an2 = SingleLayerNeuralNetwork(eta=0.1, rand_seed=2)
print('* init model params: {}'.format(an2.W.tolist()))
an2.fit(X, y, n_epochs=1, method='SGD', save_fig=False)
print('* new model params: {}'.format(an2.W.tolist()))
```
Plot the difference in loss evolution between batch and stochastic gradient descent:
```
plt.plot(an1.loss_history[:], label='batch GD')
plt.plot(an2.loss_history[::100], label='stochastic GD')
#plt.ylim(0, 2000)
plt.legend()
plt.show()
an2.plot_model()
```
## Logistic regression
Our single layer network using the logistic function for activation is very similar to the logistic regression we saw in a previous tutorial. We can easily compare our result with a logistic regression from the `sklearn` toolbox.
```
from sklearn.linear_model import LogisticRegression
X, y = make_blobs(n_samples=1000, n_features=2, random_state=42, centers=[[-0.5, -1], [1, 1]])
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X, y)
print(log_reg.coef_)
print(log_reg.intercept_)
x0, x1 = np.meshgrid(
np.linspace(-3, 3.1, 62).reshape(-1, 1),
np.linspace(-3, 4.1, 72).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = log_reg.predict_proba(X_new)
zz = y_proba[:, 1].reshape(x0.shape)
plt.figure(figsize=(4, 4))
contour = plt.contourf(x0, x1, zz, alpha=0.5)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='gray')
# decision boundary
x_bounds = np.array([-3, 3])
boundary = -(log_reg.coef_[0][0] * x_bounds + log_reg.intercept_[0]) / log_reg.coef_[0][1]
plt.plot(x_bounds, boundary, "k-", linewidth=3)
plt.xlim(-3, 3)
plt.ylim(-3, 4)
plt.show()
```
|
github_jupyter
|
# From raw *.ome.tif file to kinetic properties for immobile particles
This notebook will run ...
* picasso_addon.localize.main()
* picasso_addon.autopick.main()
* spt.immobile_props.main()
... in a single run to go from the raw data to the fully evaluated data. We therefore:
1. Define the full paths to the *ome.tif files
2. Set the execution parameters
3. Connect or start a local dask parallel computing cluster
4. Run all sub-module main() functions for all defined datasets
As a result files with extension *_locs.hdf5, *_render.hdf5, *_autopick.yaml, *_tprops.hdf5 will be created in the same folder as the *.ome.tif file.
```
import os
import traceback
import importlib
from dask.distributed import Client
import multiprocessing as mp
import picasso.io as io
import picasso_addon.localize as localize
import picasso_addon.autopick as autopick
import spt.immobile_props as improps
importlib.reload(localize)
importlib.reload(autopick)
importlib.reload(improps)
```
### 1. Define the full paths to the *ome.tif files
```
dir_names=[]
dir_names.extend([r'C:\Data\p06.SP-tracking\20-03-11_pseries_fix_B21_rep\id140_B_exp200_p114uW_T21_1\test'])
file_names=[]
file_names.extend(['id140_B_exp200_p114uW_T21_1_MMStack_Pos0.ome.tif'])
```
### 2. Set the execution parameters
```
### Valid for all evaluations
params_all={'undrift':False,
'min_n_locs':5,
'filter':'fix',
}
### Exceptions
params_special={}
```
All possible parameters for ...
* picasso_addon.localize.main()
* picasso_addon.autopick.main()
* spt.immobile_props.main()
... can be given. Please run `help(localize.main)`, `help(autopick.main)`, or `help(improps.main)`, or see the readthedocs documentation. If not stated otherwise, standard values are used (indicated in brackets).
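As a purely hypothetical illustration (the values below are made up, and the key is one already shown in `params_all` above), per-file exceptions are given in `params_special` as lists with one entry per dataset; in the processing loop of step 4, entry `i` overrides `params_all` for `file_names[i]`:
```
### Hypothetical per-file overrides for two datasets
params_special = {'min_n_locs': [5, 10]}   # file 0 would use 5, file 1 would use 10
```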
```
help(localize.main)
```
### 3. Connect or start a local dask parallel computing cluster
This is only necessary if you want to use parallel computing for the spt.immobile_props.main() execution (the default). If not, set `params_all={'parallel': False}`.
```
try:
client = Client('localhost:8787')
print('Connecting to existing cluster...')
except OSError:
improps.cluster_setup_howto()
```
Executing the prompt below starts a local cluster; afterwards we only have to run the cell above to reconnect to it. Trying to create a new cluster under the same address will throw an error!
```
Client(n_workers=max(1,int(0.8 * mp.cpu_count())),
processes=True,
threads_per_worker=1,
scheduler_port=8787,
dashboard_address=":1234")
```
### 4. Run all sub-module main() functions for all defined datasets
```
failed_path=[]
for i in range(0,len(file_names)):
### Create path
path=os.path.join(dir_names[i],file_names[i])
### Set parameters for each run
params=params_all.copy()
for key, value in params_special.items():
params[key]=value[i]
### Run main function
try:
### Load movie
movie,info=io.load_movie(path)
### Localize and undrift
out=localize.main(movie,info,path,**params)
info=info+[out[0][0]]+[out[0][1]] # Update info to used params
path=out[-1] # Update path
### Autopick
print()
locs=out[1]
out=autopick.main(locs,info,path,**params)
info=info+[out[0]] # Update info to used params
path=out[-1] # Update path
### Immobile kinetics analysis
print()
locs=out[1]
out=improps.main(locs,info,path,**params)
except Exception:
traceback.print_exc()
failed_path.extend([path])
print()
print('Failed attempts: %i'%(len(failed_path)))
```
|
github_jupyter
|
# What are Tensors?
```
# -*- coding: utf-8 -*-
import numpy as np
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)
# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y
h = x.dot(w1)
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
# Compute and print loss
loss = np.square(y_pred - y).sum()
print(t, loss)
# Backprop to compute gradients of w1 and w2 with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h_relu = grad_y_pred.dot(w2.T)
grad_h = grad_h_relu.copy()
grad_h[h < 0] = 0
grad_w1 = x.T.dot(grad_h)
# Update weights
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
```
# PyTorch Tensors
Clearly modern deep neural networks are in need of more than what our beloved numpy can offer.
Here we introduce the most fundamental PyTorch concept: the *Tensor*. A PyTorch Tensor is conceptually identical to a numpy array: a Tensor is an n-dimensional array, and PyTorch provides many functions for operating on these Tensors. Like numpy arrays, PyTorch Tensors do not know anything about deep learning or computational graphs or gradients; they are a generic tool for scientific computing.
However unlike numpy, PyTorch Tensors can utilize GPUs to accelerate their numeric computations. To run a PyTorch Tensor on GPU, you simply need to cast it to a new datatype.
Here we use PyTorch Tensors to fit a two-layer network to random data. Like the numpy example above we need to manually implement the forward and backward passes through the network:
```
import torch
dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random input and output data
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)
# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y
h = x.mm(w1)
h_relu = h.clamp(min=0)
y_pred = h_relu.mm(w2)
# Compute and print loss
loss = (y_pred - y).pow(2).sum()
print(t, loss)
# Backprop to compute gradients of w1 and w2 with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.t().mm(grad_y_pred)
grad_h_relu = grad_y_pred.mm(w2.t())
grad_h = grad_h_relu.clone()
grad_h[h < 0] = 0
grad_w1 = x.t().mm(grad_h)
# Update weights using gradient descent
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
```
# Autograd
PyTorch Variables and autograd. The autograd package provides automatic differentiation: the forward pass of your network defines a computational graph whose nodes are Tensors and whose edges are functions that produce output Tensors from input Tensors. Backpropagating through this graph then allows us to easily compute gradients.
Here we wrap the PyTorch Tensor in a Variable object, where a Variable represents a node in the computational graph. If `x` is a Variable, then `x.data` is a Tensor and `x.grad` is another Variable holding the gradient of `x` with respect to some scalar value.
PyTorch Variables have the same API as PyTorch Tensors: any operation you can do with a Tensor also works with a Variable; the only difference is that a Variable records the computational graph, allowing us to automatically compute gradients.
```
# Use of Variables and Autograd in a 2-layer network with no need to manually implement backprop!
import torch
from torch.autograd import Variable
dtype = torch.FloatTensor
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold input and outputs and wrap them in Variables.
x = Variable(torch.randn(N, D_in).type(dtype), requires_grad=False) # requires_grad=False means no need to compute gradients
y = Variable(torch.randn(N, D_out).type(dtype), requires_grad=False)
# Create random Tensors to hold weights and wrap them in Variables.
# requires_grad=True here to compute gradients w.r.t Variables during a backprop pass.
w1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad=True) # requires_grad=False means no need to compute gradients
w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y using operations on Variables; these
# are exactly the same operations we used to compute the forward pass using
# Tensors, but we do not need to keep references to intermediate values since
# we are not implementing the backward pass by hand.
y_pred = x.mm(w1).clamp(min=0).mm(w2)
# Compute and print loss using operations on Variables.
# Now loss is a Variable of shape (1,) and loss.data is a Tensor of shape
# (1,); loss.data[0] is a scalar value holding the loss.
loss = (y_pred - y).pow(2).sum()
print(t, loss.data[0])
# Use autograd to compute the backward pass. This call will compute the
# gradient of loss with respect to all Variables with requires_grad=True.
# After this call w1.grad and w2.grad will be Variables holding the gradient
# of the loss with respect to w1 and w2 respectively.
loss.backward()
# Update weights using gradient descent; w1.data and w2.data are Tensors,
# w1.grad and w2.grad are Variables and w1.grad.data and w2.grad.data are
# Tensors.
w1.data -= learning_rate * w1.grad.data
w2.data -= learning_rate * w2.grad.data
# Manually zero the gradients after updating weights
w1.grad.data.zero_()
w2.grad.data.zero_()
```
# PyTorch: Defining new autograd functions
Under the hood, each primitive autograd operator is really two functions that operate on Tensors. The forward function computes output Tensors from input Tensors. The backward function receives the gradient of the output Tensors with respect to some scalar value, and computes the gradient of the input Tensors with respect to that same scalar value.
In PyTorch we can easily define our own autograd operator by defining a subclass of torch.autograd.Function and implementing the forward and backward functions. We can then use our new autograd operator by constructing an instance and calling it like a function, passing Variables containing input data.
In this example we define our own custom autograd function for performing the ReLU nonlinearity, and use it to implement our two-layer network:
```
# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable
class MyReLU(torch.autograd.Function):
"""
We can implement our own custom autograd Functions by subclassing
torch.autograd.Function and implementing the forward and backward passes
which operate on Tensors.
"""
def forward(self, input):
"""
In the forward pass we receive a Tensor containing the input and return a
Tensor containing the output. You can cache arbitrary Tensors for use in the
backward pass using the save_for_backward method.
"""
self.save_for_backward(input)
return input.clamp(min=0)
def backward(self, grad_output):
"""
In the backward pass we receive a Tensor containing the gradient of the loss
with respect to the output, and we need to compute the gradient of the loss
with respect to the input.
"""
input, = self.saved_tensors
grad_input = grad_output.clone()
grad_input[input < 0] = 0
return grad_input
dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold input and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in).type(dtype), requires_grad=False)
y = Variable(torch.randn(N, D_out).type(dtype), requires_grad=False)
# Create random Tensors for weights, and wrap them in Variables.
w1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)
learning_rate = 1e-6
for t in range(500):
# Construct an instance of our MyReLU class to use in our network
relu = MyReLU()
# Forward pass: compute predicted y using operations on Variables; we compute
# ReLU using our custom autograd operation.
y_pred = relu(x.mm(w1)).mm(w2)
# Compute and print loss
loss = (y_pred - y).pow(2).sum()
print(t, loss.data[0])
# Use autograd to compute the backward pass.
loss.backward()
# Update weights using gradient descent
w1.data -= learning_rate * w1.grad.data
w2.data -= learning_rate * w2.grad.data
# Manually zero the gradients after updating weights
w1.grad.data.zero_()
w2.grad.data.zero_()
```
## What is a nn module
When building neural networks we frequently think of arranging the computation into layers, some of which have learnable parameters which will be optimized during learning.
In TensorFlow, packages like Keras, TensorFlow-Slim, and TFLearn provide higher-level abstractions over raw computational graphs that are useful for building neural networks.
In PyTorch, the nn package serves this same purpose. The nn package defines a set of Modules, which are roughly equivalent to neural network layers. A Module receives input Variables and computes output Variables, but may also hold internal state such as Variables containing learnable parameters. The nn package also defines a set of useful loss functions that are commonly used when training neural networks.
In this example we use the nn package to implement our two-layer network:
```
# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Variables for its weight and bias.
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(size_average=False)
learning_rate = 1e-4
for t in range(500):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Variable of input data to the Module and it produces
# a Variable of output data.
y_pred = model(x)
# Compute and print loss. We pass Variables containing the predicted and true
# values of y, and the loss function returns a Variable containing the
# loss.
loss = loss_fn(y_pred, y)
print(t, loss.data[0])
# Zero the gradients before running the backward pass.
model.zero_grad()
# Backward pass: compute gradient of the loss with respect to all the learnable
# parameters of the model. Internally, the parameters of each Module are stored
# in Variables with requires_grad=True, so this call will compute gradients for
# all learnable parameters in the model.
loss.backward()
# Update the weights using gradient descent. Each parameter is a Variable, so
# we can access its data and gradients like we did before.
for param in model.parameters():
param.data -= learning_rate * param.grad.data
```
## PyTorch - optim
The `optim` package abstracts away the weight-update rule: instead of manually modifying `param.data`, we construct an optimizer (here Adam) and call `optimizer.step()` after each backward pass. We use a learning rate of $10^{-4}$.
```
import torch
from torch.autograd import Variable
N, D_in, H, D_out = 64, 1000, 100, 10
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
model = torch.nn.Sequential( torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out)
)
loss_fxn = torch.nn.MSELoss(size_average=False)
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# We loop
for i in range(500):
y_pred = model(x)
loss = loss_fxn(y_pred, y)
print(i, loss.data[0])
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
## Custom nn module
For more complex computation, you can define your own module by subclassing nn.Module
```
import torch
from torch.autograd import Variable
class DoubleLayerNet(torch.nn.Module):
def __init__(self, D_in, H, D_out):
# initialize 2 instances of nn.Linear mods
super(DoubleLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
# in this fxn we accept a Var of input data and
# return a Var of output data.
h_relu = self.linear1(x).clamp(min=0)
y_pred = self.linear2(h_relu)
return y_pred
# Next, again as usual, define batch size, input dimensions, hidden dimension and output dimension
N, D_in, H, D_out = 64, 1000, 100, 10
# Create some random tensors to hold both input and output
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
# Build model by instantiating class defined above
my_model = DoubleLayerNet(D_in, H, D_out)
# Build loss fxn and optimizer
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(my_model.parameters(), lr=1e-4)
# and then we loop
for i in range(500):
# fwd pass, calculate predicted y by passing x to the model
y_pred = my_model(x)
#calculate and print loss
loss = criterion(y_pred, y)
print(i, loss.data[0])
# Zero gradients, perform a backprop pass and update the weights as we go along
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
|
github_jupyter
|
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
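If you prefer not to send figures to your online account, a minimal offline-mode sketch looks like this (assuming the classic `plotly<4` API used throughout this notebook):
```
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go

init_notebook_mode(connected=True)  # render inside the notebook, no account needed
iplot([go.Scatter(x=[0, 1, 2], y=[6, 10, 2],
                  error_y=dict(type='data', array=[1, 2, 3], visible=True))])
```
The examples below use `py.iplot`, which uploads to Plotly's servers; swapping in `iplot` from `plotly.offline` keeps everything local.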
#### Basic Symmetric Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[0, 1, 2],
y=[6, 10, 2],
error_y=dict(
type='data',
array=[1, 2, 3],
visible=True
)
)
]
py.iplot(data, filename='basic-error-bar')
```
#### Asymmetric Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[1, 2, 3, 4],
y=[2, 1, 3, 4],
error_y=dict(
type='data',
symmetric=False,
array=[0.1, 0.2, 0.1, 0.1],
arrayminus=[0.2, 0.4, 1, 0.2]
)
)
]
py.iplot(data, filename='error-bar-asymmetric-array')
```
#### Error Bars as a Percentage of the y Value
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[0, 1, 2],
y=[6, 10, 2],
error_y=dict(
type='percent',
value=50,
visible=True
)
)
]
py.iplot(data, filename='percent-error-bar')
```
#### Asymmetric Error Bars with a Constant Offset
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[1, 2, 3, 4],
y=[2, 1, 3, 4],
error_y=dict(
type='percent',
symmetric=False,
value=15,
valueminus=25
)
)
]
py.iplot(data, filename='error-bar-asymmetric-constant')
```
#### Horizontal Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
x=[1, 2, 3, 4],
y=[2, 1, 3, 4],
error_x=dict(
type='percent',
value=10
)
)
]
py.iplot(data, filename='error-bar-horizontal')
```
#### Bar Chart with Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
trace1 = go.Bar(
x=['Trial 1', 'Trial 2', 'Trial 3'],
y=[3, 6, 4],
name='Control',
error_y=dict(
type='data',
array=[1, 0.5, 1.5],
visible=True
)
)
trace2 = go.Bar(
x=['Trial 1', 'Trial 2', 'Trial 3'],
y=[4, 7, 3],
name='Experimental',
error_y=dict(
type='data',
array=[0.5, 1, 2],
visible=True
)
)
data = [trace1, trace2]
layout = go.Layout(
barmode='group'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='error-bar-bar')
```
#### Colored and Styled Error Bars
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x_theo = np.linspace(-4, 4, 100)
sincx = np.sinc(x_theo)
x = [-3.8, -3.03, -1.91, -1.46, -0.89, -0.24, -0.0, 0.41, 0.89, 1.01, 1.91, 2.28, 2.79, 3.56]
y = [-0.02, 0.04, -0.01, -0.27, 0.36, 0.75, 1.03, 0.65, 0.28, 0.02, -0.11, 0.16, 0.04, -0.15]
trace1 = go.Scatter(
x=x_theo,
y=sincx,
name='sinc(x)'
)
trace2 = go.Scatter(
x=x,
y=y,
mode='markers',
name='measured',
error_y=dict(
type='constant',
value=0.1,
color='#85144B',
thickness=1.5,
width=3,
),
error_x=dict(
type='constant',
value=0.2,
color='#85144B',
thickness=1.5,
width=3,
),
marker=dict(
color='#85144B',
size=8
)
)
data = [trace1, trace2]
py.iplot(data, filename='error-bar-style')
```
#### Reference
See https://plot.ly/python/reference/#scatter for more information and chart attribute options!
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'error-bars.ipynb', 'python/error-bars/', 'Error Bars | plotly',
'How to add error-bars to charts in Python with Plotly.',
title = 'Error Bars | plotly',
name = 'Error Bars',
thumbnail='thumbnail/error-bar.jpg', language='python',
page_type='example_index', has_thumbnail='true', display_as='statistical', order=1,
ipynb='~notebook_demo/18')
```
|
github_jupyter
|
<small><small><i>
All the IPython Notebooks in **[Python Seaborn Module](https://github.com/milaan9/12_Python_Seaborn_Module)** lecture series by **[Dr. Milaan Parmar](https://www.linkedin.com/in/milaanparmar/)** are available @ **[GitHub](https://github.com/milaan9)**
</i></small></small>
<a href="https://colab.research.google.com/github/milaan9/12_Python_Seaborn_Module/blob/main/017_Seaborn_FacetGrid_Plot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# FacetGrid
Welcome to another lecture on *Seaborn*! Our journey began with assigning *style* and *color* to our plots as per our requirement. Then we moved on to *visualize distribution of a dataset*, and *Linear relationships*, and further we dived into topics covering *plots for Categorical data*. Every now and then, we've also roughly touched customization aspects using underlying Matplotlib code. That indeed is the end of the types of plots offered by Seaborn, and only leaves us with widening the scope of usage of all the plots that we have learnt till now.
Our discussion in upcoming lectures will mostly focus on the core machinery with which *Seaborn* builds the figures we have been detailing previously. This of course isn't going to be a brand new topic, because every now and then I have used these features in previous lectures, but from here on we are going to deal with each one of them specifically.
To introduce our new topic, i.e. **<span style="color:red">Grids</span>**, we shall at first list the options available. Majorly, there are just two aspects to our discussion on *Grids* that includes:
- **<span style="color:red">FacetGrid</span>**
- **<span style="color:red">PairGrid</span>**
Additionally, we also have a companion function for *PairGrid* to enhance execution speed of *PairGrid*, i.e.
- **<span style="color:red">Pairplot</span>**
Our discourse shall detail each one of these topics in-length for better understanding. As we have already covered the statistical inference of each type of plot, our emphasis shall mostly be on scaling and parameter variety of known plots on these grids. So let us commence our journey with **[FacetGrid](http://seaborn.pydata.org/generated/seaborn.FacetGrid.html?highlight=facetgrid#seaborn.FacetGrid)** in this lecture.
## FacetGrid
The term **Facet** here refers to *a dimension*, or say, an *aspect* or a feature of a *multi-dimensional dataset*. This analysis is extremely useful when working with a multivariate dataset that has a varied blend of datatypes, especially in the *Data Science* and *Machine Learning* domains, where you would generally be dealing with huge datasets. If you're a *working professional*, you know what I am talking about. And if you're a *fresher* or a *student*, just to give you an idea: in this era of *Big Data*, an average *CSV file* (generally the most common form) or even an RDBMS can easily range from gigabytes to terabytes of data. If you are dealing with *image/video/audio datasets*, you may expect those to be in the *hundreds of gigabytes*.
On the other hand, the term **Grid** refers to any *framework with spaced bars that are parallel to or cross each other, to form a series of squares or rectangles*. Statistically, these *Grids* are also used to represent and understand an entire *population* or just a *sample space* out of it. In general, these are pretty powerful tool for presentation, to describe our dataset and to study the *interrelationship*, or *correlation* between *each facet* of any *environment*.
Subplot grid for plotting conditional relationships.
The FacetGrid is an object that links a Pandas DataFrame to a matplotlib figure with a particular structure.
In particular, FacetGrid is used to draw plots with multiple Axes where each Axes shows the same relationship conditioned on different levels of some variable. It’s possible to condition on up to three variables by assigning variables to the rows and columns of the grid and using different colors for the plot elements.
The general approach to plotting here is called “small multiples”, where the same kind of plot is repeated multiple times, and the specific use of small multiples to display the same relationship conditioned on one or more other variables is often called a “trellis plot”.
The basic workflow is to initialize the FacetGrid object with the dataset and the variables that are used to structure the grid. Then one or more plotting functions can be applied to each subset by calling **`FacetGrid.map()`** or **`FacetGrid.map_dataframe()`**. Finally, the plot can be tweaked with other methods to do things like change the axis labels, use different ticks, or add a legend. See the detailed code examples below for more information.
To satisfy our curiosity, let us plot a simple **<span style="color:red">FacetGrid</span>** before continuing with our discussion. And to do that, we shall once again quickly import our package dependencies and set the aesthetics for future use with built-in datasets.
```
# Importing intrinsic libraries:
import numpy as np
import pandas as pd
np.random.seed(101)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(style="whitegrid", palette="rocket")
import warnings
warnings.filterwarnings("ignore")
# Let us also get tableau colors we defined earlier:
tableau_20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),
(44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),
(148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),
(227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),
(188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]
# Scaling above RGB values to [0, 1] range, which is Matplotlib acceptable format:
for i in range(len(tableau_20)):
r, g, b = tableau_20[i]
tableau_20[i] = (r / 255., g / 255., b / 255.)
# Loading built-in Tips dataset:
tips = sns.load_dataset("tips")
tips.head()
# Initialize a 2x2 grid of facets using the tips dataset:
sns.set(style="ticks", color_codes=True)
sns.FacetGrid(tips, row='time', col='smoker')
# Draw a univariate plot on each facet:
x = sns.FacetGrid(tips, col='time',row='smoker')
x = x.map(plt.hist,"total_bill")
bins = np.arange(0,65,5)
x = sns.FacetGrid(tips, col="time", row="smoker")
x =x.map(plt.hist, "total_bill", bins=bins, color="g")
# Plot a bivariate function on each facet:
x = sns.FacetGrid(tips, col="time", row="smoker")
x = x.map(plt.scatter, "total_bill", "tip", edgecolor="w")
# Assign one of the variables to the color of the plot elements:
x = sns.FacetGrid(tips, col="time", hue="smoker")
x = x.map(plt.scatter,"total_bill","tip",edgecolor = "w")
x =x.add_legend()
# Plotting a basic FacetGrid with Scatterplot representation:
ax = sns.FacetGrid(tips, col="sex", hue="smoker", size=5)
ax.map(plt.scatter, "total_bill", "tip", alpha=.6)
ax.add_legend()
```
This is a combined scatter representation of the Tips dataset that we have seen earlier as well, where the total tip generated is drawn against the total bill amount, in accordance with gender and smoking practice. With this we can conclude how **FacetGrid** helps us visualize the distribution of a variable or the relationship between multiple variables separately within subsets of our dataset. Important to note here is that a Seaborn FacetGrid can only support up to **3-dimensional figures**, using the `row`, `column` and `hue` dimensions of the grid for *categorical* and *discrete* variables within our dataset.
Let us now have a look at the *parameters* offered or supported by Seaborn for a **FacetGrid**:
**`seaborn.FacetGrid(data, row=None, col=None, hue=None, col_wrap=None, sharex=True, sharey=True, size=3, aspect=1, palette=None, row_order=None, col_order=None, hue_order=None, hue_kws=None, dropna=True, legend_out=True, despine=True, margin_titles=False, xlim=None, ylim=None, subplot_kws=None, gridspec_kws=None)`**
There seems to be few new parameters out here for us, so let us one-by-one understand their scope before we start experimenting with those on our plots:
- We are well acquainted with mandatory **`data`**, **`row`**, **`col`** and **`hue`** parameters.
- Next is **`col_wrap`**, which **wraps the column facets** at the given width so that they can span multiple rows instead of one long row (see the sketch after this list).
- **`sharex`**, when declared **`False`**, gives each sub-plot its **own dedicated X-axis**; the same concept holds good for **`sharey`** and the Y-axis.
- **`size`** helps us determine the size of our grid-frame.
- We may also declare **`hue_kws`** parameter that lets us **control other aesthetics** of our plot.
- **`dropna`** drops observations with **missing (NULL) values** in the selected features; and **`legend_out`** places the Legend either inside or outside our plot, as we've already seen.
- **`margin_titles`** fetch the **feature names** from our dataset; and **`xlim`** & **`ylim`** additionally offers Matplotlib style limitation to each of our axes on the grid.
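As a small aside, here is a minimal sketch of two parameters that the examples below don't otherwise demonstrate, **`col_wrap`** and **`xlim`** (assuming the `tips` dataset and imports loaded above):
```
# Wrap the four 'day' facets onto a 2x2 layout and fix the x-range of every facet:
g = sns.FacetGrid(tips, col="day", col_wrap=2, xlim=(0, 60))
g.map(plt.hist, "total_bill", bins=np.arange(0, 65, 5))
```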
That pretty much seems to cover *intrinsic parameters* so let us now try to use them one-by-one with slight modifications:
Let us begin by pulling the *Legend inside* our FacetGrid and *creating a Header* for our grid:
```
ax = sns.FacetGrid(tips, col="sex", hue="smoker", size=5, legend_out=False)
ax.map(plt.scatter, "total_bill", "tip", alpha=.6)
ax.add_legend()
plt.suptitle('Tip Collection based on Gender and Smoking', fontsize=11)
```
So declaring **`legend_out`** as **`False`** and creating a **Superhead title** using *Matplotlib* seems to be working great on our Grid. Customization on *Header size* gives us an add-on capability as well. Right now, we are going by default **`palette`** for **marker colors** which can be customized by setting to a different one. Let us try other parameters as well:
Actually, before we jump further into utilization of other parameters, let me quickly take you behind the curtain of this plot. As visible, we assigned **`ax`** as a variable to our **FacetGrid** for creating a visualizaion figure, and then plotted a **Scatterplot** on top of it, before decorating further with a *Legend* and a *Super Title*. So when we initialized the assignment of **`ax`**, the grid actually gets created using backend *Matplotlib figure and axes*, though doesn't plot anything on top of it. This is when we call Scatterplot on our sample data, that in turn at the backend calls **`FacetGrid.map()`** function to map this grid to our Scatterplot. We intended to draw a linear relation plot, and thus entered multiple variable names, i.e. **`Total Bill`** and associated **`Tip`** to form *facets*, or dimensions of our grid.
```
# Change the size and aspect ratio of each facet:
x = sns.FacetGrid(tips, col="day", size=5, aspect=.5)
x =x.map(plt.hist, "total_bill", bins=bins)
# Specify the order for plot elements:
g = sns.FacetGrid(tips, col="smoker", col_order=["Yes", "No"])
g = g.map(plt.hist, "total_bill", bins=bins, color="m")
# Use a different color palette:
kws = dict(s=50, linewidth=.5, edgecolor="w")
g =sns.FacetGrid(tips, col="sex", hue="time", palette="Set1",\
hue_order=["Dinner", "Lunch"])
g = g.map(plt.scatter, "total_bill", "tip", **kws)
g.add_legend()
# Use a dictionary mapping hue levels to colors:
pal = dict(Lunch="seagreen", Dinner="gray")
g = sns.FacetGrid(tips, col="sex", hue="time", palette=pal,\
hue_order=["Dinner", "Lunch"])
g = g.map(plt.scatter, "total_bill", "tip", **kws)
g.add_legend()
# FacetGrid with boxplot
x = sns.FacetGrid(tips,col= 'day')
x = x.map(sns.boxplot,"total_bill","time")
```
Also important to note is the use the **[matplotlib.pyplot.gca()](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.gca.html)** function, if required to *set the current axes* on our Grid. This shall fetch the current Axes instance on our current figure matching the given keyword arguments or params, & if unavailable, it shall even create one.
```
# Let us create a dummy DataFrame:
football = pd.DataFrame({
"Wins": [76, 64, 38, 78, 63, 45, 32, 46, 13, 40, 59, 80],
"Loss": [55, 67, 70, 56, 59, 69, 72, 24, 45, 21, 58, 22],
"Team": ["Arsenal"] * 4 + ["Liverpool"] * 4 + ["Chelsea"] * 4,
"Year": [2015, 2016, 2017, 2018] * 3})
```
Before I begin illustration using this DataFrame, on a lighter note, I would add a disclosure that this is a dummy dataset and holds no resemblance whatsoever to actual records of respective Soccer clubs. So if you're one among those die-hard fans of any of these clubs, kindly excuse me if the numbers don't tally, as they are all fabricated.
Here, **football** is kind of a *Time-series Pandas DataFrame* that in entirety reflects 4 features, where **`Wins`** and **`Loss`** variables represent the quarterly Scorecard of three soccer **`Teams`** for last four **`Years`**, from 2015 to 2018. Let us check how this DataFrame looks like:
```
football
```
This looks pretty good for our purpose so now let us initialize our FacetGrid on top of it and try to obtain a time-indexed with further plotting. In production environment, to keep our solution scalable, this is generally done by defining a function for data manipulation so we shall try that in this example:
```
# Defining a customizable function to be precise with our requirements & shall discuss it a little later:
# We shall be using a new type of plot here that I shall discuss in detail later on.
def football_plot(data, color):
sns.heatmap(data[["Wins", "Loss"]])
# 'margin_titles' won't necessarily guarantee desired results so better to be cautious:
ax = sns.FacetGrid(football, col="Team", size=5, margin_titles=True)
ax.map_dataframe(football_plot)
ax = sns.FacetGrid(football, col="Team", size=5)
ax.map(sns.kdeplot, "Wins", "Year", hist=True, lw=2)
```
As visible, **Heatmap** plots rectangular boxes for data points as a color-encoded matrix, and this is a topic we shall be discussing in detail in another Lecture but for now, I just wanted you to have a preview of it, and hence used it on top of our **FacetGrid**. Another good thing to know with *FacetGrid* is **gridspec** module which allows Matplotlib params to be passed for drawing attention to a particular facet by increasing its size. To better understand, let us try to use this module now:
```
# Loading built-in Titanic Dataset:
titanic = sns.load_dataset("titanic")
# Assigning reformed `deck` column:
titanic = titanic.assign(deck=titanic.deck.astype(object)).sort_values("deck")
# Creating Grid and Plot:
ax = sns.FacetGrid(titanic, col="class", sharex=False, size=7,
gridspec_kws={"width_ratios": [3.5, 2, 2]})
ax.map(sns.boxplot, "deck", "age")
ax.set_titles(fontweight='bold', size=17)
```
Breaking it down: at first we import our built-in Titanic dataset, and then assign a new column, i.e. **`deck`**, using the Pandas **`.assign()`** function. Here we declare this new column as a re-typed, sorted version of the pre-existing **`deck`** column from the Titanic dataset. Then we create our *FacetGrid*, specifying the DataFrame and the column on which the grids get segregated, with **`sharex=False`** so that each class keeps its own X-axis for **`deck`** against the **`age`** of passengers. Next in action are our **grid keyword specifications**, where we decide the *width ratios* that shall be passed on to these grids. Finally, we have our **Box Plot** representing values of the **`age`** feature across the respective decks.
Now let us try to use different axes with same size for multivariate plotting on Tips dataset:
```
# Loading built-in Tips dataset:
tips = sns.load_dataset("tips")
# Mapping a Scatterplot to our FacetGrid:
ax = sns.FacetGrid(tips, col="smoker", row="sex", size=3.5)
ax = (ax.map(plt.scatter, "total_bill", "tip", color=tableau_20[6]).set_axis_labels("Total Bill Generated (USD)", "Tip Amount"))
# Increasing size for subplot Titles & making it appear Bolder:
ax.set_titles(fontweight='bold', size=11)
```
**Scatterplot** dealing with data that has multiple variables is no new science for us so instead let me highlight what **`.map()`** does for us. This function actually allows us to project our figure axes, in accordance to which our Scatterplot spreads the feature datapoints across the grids, depending upon the segregators. Here we have **`sex`** and **`smoker`** as our segregators (When I use the general term "segregator", it just refers to the columns on which we decide to determine the layout). This comes in really handy as we can pass *Matplotlib parrameters* for further customization of our plot. At the end, when we add **`.set_axis_labels()`** it gets easy for us to label our axes but please note that this method shall work for you only when you're dealing with grids, hence you didn't observe me adapting to this function, while detailing various other plots.
- Let us now talk about the **`football_plot`** function we defined earlier with **football** DataFrame. The only reason I didn't speak of it then was because I wanted you to go through a few more parameter implementation before getting into this. There are **3 important rules for defining such functions** that are supported by **[FacetGrid.map](http://xarray.pydata.org/en/stable/generated/xarray.plot.FacetGrid.map.html)**:
- They must take array-like inputs as positional arguments, with the first argument corresponding to the **`X-axis`** and the second argument corresponding to the **`Y-axis`**.
- They must also accept two keyword arguments: **`color`** and **`label`**. If you want to use a **`hue`** variable, then these should get passed to the underlying plotting function. (As a side note: you may just catch **`**kwargs`** and not do anything with them, if it's not relevant to the specific plot you're making.)
- Lastly, when called, they must draw a plot on the "currently active" matplotlib Axes.
- Important to note is that there may be cases where your function draws a plot that looks correct without taking `x`, `y`, positional inputs and then it is better to just call the plot, like: **`ax.set_axis_labels("Column_1", "Column_2")`** after you use **`.map()`**, which should rename your axes properly. Alternatively, you may also want to do something like `ax.set(xticklabels=)` to get more meaningful ticks.
- Well I am also quite stoked to mention another important function (though not that comonly used), that is **[`FacetGrid.map_dataframe()`](http://nullege.com/codes/search/axisgrid.FacetGrid.map_dataframe)**. The rules here are similar to **`FacetGrid.map`** but the function you pass must accept a DataFrame input in a parameter called `data`, and instead of taking *array-like positional* inputs it takes *strings* that correspond to variables in that dataframe. Then on each iteration through the *facets*, the function will be called with the *Input dataframe*, masked to just the values for that combination of **`row`**, **`col`**, and **`hue`** levels.
Another important point to note with both of the above-mentioned functions is that the **`return`** value is ignored, so you don't really have to worry about it. Just for illustration purposes, let us draft a function that simply *draws a horizontal line* in each **`facet`** at **`y=2`** and ignores all the input data:
```
# That is all you require in your function: draw on the currently active Axes, ignoring the inputs.
def plot_func(x, y, color=None, label=None):
plt.axhline(y=2)
# It can then be applied to each facet with ax.map(plot_func, "total_bill", "tip")
```
I know this function concept might look a little hazy at the moment, but once you have covered more of the Matplotlib syntax in particular, the picture shall get much clearer for you.
Let us look at one more example of **`FacetGrid()`** and this time let us again create a synthetic DataFrame for this demonstration:
```
# Creating synthetic Data (Don't focus on how it's getting created):
units = np.linspace(0, 50)
A = [1., 18., 40., 100.]
df = []
for i in A:
V1 = np.sin(i * units)
V2 = np.cos(i * units)
df.append(pd.DataFrame({"units": units, "V_1": V1, "V_2": V2, "A": i}))
sample = pd.concat(df, axis=0)
# Previewing DataFrame:
sample.head(10)
sample.describe()
# Melting our sample DataFrame:
sample_melt = sample.melt(id_vars=['A', 'units'], value_vars=['V_1', 'V_2'])
# Creating plot:
ax = sns.FacetGrid(sample_melt, col='A', hue='A', palette="icefire", row='variable', sharey='row', margin_titles=True)
ax.map(plt.plot, 'units', 'value')
ax.add_legend()
```
This process shall come in handy if you ever wish to vertically stack rows of subplots on top of one another. You do not really have to focus on the process of creating the dataset, as generally you will have your dataset provided with a problem statement. For our plot, you may just consider these visual variations as **[Sinusoidal waves](https://en.wikipedia.org/wiki/Sine_wave)**. I shall attach a link in our notebook, if you wish to dig deeper into what these are and how they are actually computed.
Our next lecture would be pretty much a small follow up to this lecture, where we would try to bring more of *Categorical data* to our **`FacetGrid()`**. Meanwhile, I would again suggest you to play around with analyzing and plotting datasets, as much as you can because visualization is a very important facet of *Data Science & Research*. And, I shall see you in our next lecture with **[Heat Map](https://github.com/milaan9/12_Python_Seaborn_Module/blob/main/018_Seaborn_Heat_Map.ipynb)**.
```
import pandas as pd
import numpy as np
import mxnet as mx
from mxnet import nd, autograd, gluon, init
from mxnet.gluon import nn, rnn
import gluonnlp as nlp
import pkuseg
import multiprocessing as mp
import time
from d2l import try_gpu
import itertools
import jieba
from sklearn.metrics import accuracy_score, f1_score
import d2l
import re
import warnings
warnings.filterwarnings("ignore")
# fixed random number seed
np.random.seed(2333)
mx.random.seed(2333)
DATA_FOLDER = 'data/'
TRAIN_DATA = 'train.csv'
WORD_EMBED = 'sgns.weibo.bigram-char'
LABEL_FILE = 'train.label'
N_ROWS=1000
ctx = mx.gpu(0)
seg = pkuseg.pkuseg(model_name='web')
train_df = pd.read_csv(DATA_FOLDER+TRAIN_DATA, sep='|')
train_df = train_df.sample(frac=1)
train_df.head()
dataset =[ [row[0], row[1]] for _, row in train_df.iterrows()]
train_dataset, valid_dataset = nlp.data.train_valid_split(dataset)
len(train_dataset), len(valid_dataset)
def tokenizer(x):
tweet, label = x
if type(tweet) != str:
print(tweet)
tweet = str(tweet)
word_list = jieba.lcut(tweet)
if len(word_list)==0:
word_list=['<unk>']
return word_list, label
def get_length(x):
return float(len(x[0]))
def to_word_list(dataset):
start = time.time()
with mp.Pool() as pool:
# Each sample is processed in an asynchronous manner.
dataset = gluon.data.ArrayDataset(pool.map(tokenizer, dataset))
lengths = gluon.data.ArrayDataset(pool.map(get_length, dataset))
end = time.time()
print('Done! Tokenizing Time={:.2f}s, #Sentences={}'.format(end - start, len(dataset)))
return dataset, lengths
train_word_list, train_word_lengths = to_word_list(train_dataset)
valid_word_list, valid_word_lengths = to_word_list(valid_dataset)
train_seqs = [sample[0] for sample in train_word_list]
counter = nlp.data.count_tokens(list(itertools.chain.from_iterable(train_seqs)))
vocab = nlp.Vocab(counter, max_size=200000)
# load custom pre-trained embedding
embedding_weights = nlp.embedding.TokenEmbedding.from_file(file_path=DATA_FOLDER+WORD_EMBED)
vocab.set_embedding(embedding_weights)
print(vocab)
def token_to_idx(x):
return vocab[x[0]], x[1]
# A token index or a list of token indices is returned according to the vocabulary.
with mp.Pool() as pool:
train_dataset = pool.map(token_to_idx, train_word_list)
valid_dataset = pool.map(token_to_idx, valid_word_list)
batch_size = 1024
bucket_num = 20
bucket_ratio = 0.1
def get_dataloader():
    # Construct the DataLoader: pad the tweets and stack the labels
batchify_fn = nlp.data.batchify.Tuple(nlp.data.batchify.Pad(axis=0), \
nlp.data.batchify.Stack())
# in this example, we use a FixedBucketSampler,
# which assigns each data sample to a fixed bucket based on its length.
batch_sampler = nlp.data.sampler.FixedBucketSampler(
train_word_lengths,
batch_size=batch_size,
num_buckets=bucket_num,
ratio=bucket_ratio,
shuffle=True)
print(batch_sampler.stats())
# train_dataloader
train_dataloader = gluon.data.DataLoader(
dataset=train_dataset,
batch_sampler=batch_sampler,
batchify_fn=batchify_fn)
# valid_dataloader
valid_dataloader = gluon.data.DataLoader(
dataset=valid_dataset,
batch_size=batch_size,
shuffle=False,
batchify_fn=batchify_fn)
return train_dataloader, valid_dataloader
train_dataloader, valid_dataloader = get_dataloader()
for tweet, label in train_dataloader:
print(tweet, label)
break
```
## Model construction
TextCNN model, weighted cross-entropy loss, and the full training/validation loop
```
class TextCNN(nn.Block):
def __init__(self, vocab_len, embed_size, kernel_sizes, num_channels, \
dropout, nclass, **kwargs):
super(TextCNN, self).__init__(**kwargs)
self.embedding = nn.Embedding(vocab_len, embed_size)
self.constant_embedding = nn.Embedding(vocab_len, embed_size)
self.dropout = nn.Dropout(dropout)
self.decoder = nn.Dense(nclass)
self.pool = nn.GlobalMaxPool1D()
self.convs = nn.Sequential()
for c, k in zip(num_channels, kernel_sizes):
self.convs.add(nn.Conv1D(c, k, activation='relu'))
def forward(self, inputs):
embeddings = nd.concat(
self.embedding(inputs), self.constant_embedding(inputs), dim=2)
embeddings = embeddings.transpose((0, 2, 1))
encoding = nd.concat(*[nd.flatten(
self.pool(conv(embeddings))) for conv in self.convs], dim=1)
outputs = self.decoder(self.dropout(encoding))
return outputs
vocab_len = len(vocab)
emsize = 300 # word embedding size
nhidden = 400 # lstm hidden_dim
nlayers = 4 # lstm layers
natt_unit = 400 # the hidden_units of attention layer
natt_hops = 20 # the channels of attention
nfc = 256 # last dense layer size
nclass = 72 # we have 72 emoji in total
drop_prob = 0.2
pool_way = 'flatten' # the way to handle M
prune_p = None
prune_q = None
ctx = try_gpu()
kernel_sizes, nums_channels = [2, 3, 4, 5], [100, 100, 100, 100]
model = TextCNN(vocab_len, emsize, kernel_sizes, nums_channels, drop_prob, nclass)
model.initialize(init.Xavier(), ctx=ctx)
print(model)
model.embedding.weight.set_data(vocab.embedding.idx_to_vec)
model.constant_embedding.weight.set_data(vocab.embedding.idx_to_vec)
model.constant_embedding.collect_params().setattr('grad_req', 'null')
tmp = nd.array([10, 20, 30, 40, 50, 60], ctx=ctx).reshape(1, -1)
model(tmp)
class WeightedSoftmaxCE(nn.HybridBlock):
def __init__(self, sparse_label=True, from_logits=False, **kwargs):
super(WeightedSoftmaxCE, self).__init__(**kwargs)
with self.name_scope():
self.sparse_label = sparse_label
self.from_logits = from_logits
def hybrid_forward(self, F, pred, label, class_weight, depth=None):
if self.sparse_label:
label = F.reshape(label, shape=(-1, ))
label = F.one_hot(label, depth)
if not self.from_logits:
pred = F.log_softmax(pred, -1)
weight_label = F.broadcast_mul(label, class_weight)
loss = -F.sum(pred * weight_label, axis=-1)
# return F.mean(loss, axis=0, exclude=True)
return loss
def calculate_loss(x, y, model, loss, class_weight):
pred = model(x)
y = nd.array(y.asnumpy().astype('int32')).as_in_context(ctx)
if loss_name == 'sce':
l = loss(pred, y)
elif loss_name == 'wsce':
l = loss(pred, y, class_weight, class_weight.shape[0])
    else:
        raise NotImplementedError
return pred, l
def one_epoch(data_iter, model, loss, trainer, ctx, is_train, epoch,
clip=None, class_weight=None, loss_name='sce'):
loss_val = 0.
total_pred = []
total_true = []
n_batch = 0
for batch_x, batch_y in data_iter:
batch_x = batch_x.as_in_context(ctx)
batch_y = batch_y.as_in_context(ctx)
if is_train:
with autograd.record():
batch_pred, l = calculate_loss(batch_x, batch_y, model, \
loss, class_weight)
# backward calculate
l.backward()
# clip gradient
clip_params = [p.data() for p in model.collect_params().values()]
if clip is not None:
norm = nd.array([0.0], ctx)
for param in clip_params:
if param.grad is not None:
norm += (param.grad ** 2).sum()
norm = norm.sqrt().asscalar()
if norm > clip:
for param in clip_params:
if param.grad is not None:
param.grad[:] *= clip / norm
            # update params
trainer.step(batch_x.shape[0])
else:
batch_pred, l = calculate_loss(batch_x, batch_y, model, \
loss, class_weight)
# keep result for metric
batch_pred = nd.argmax(nd.softmax(batch_pred, axis=1), axis=1).asnumpy()
batch_true = np.reshape(batch_y.asnumpy(), (-1, ))
total_pred.extend(batch_pred.tolist())
total_true.extend(batch_true.tolist())
batch_loss = l.mean().asscalar()
n_batch += 1
loss_val += batch_loss
        # check the result of the training phase
if is_train and n_batch % 400 == 0:
print('epoch %d, batch %d, batch_train_loss %.4f, batch_train_acc %.3f' %
(epoch, n_batch, batch_loss, accuracy_score(batch_true, batch_pred)))
# metric
F1 = f1_score(np.array(total_true), np.array(total_pred), average='weighted')
acc = accuracy_score(np.array(total_true), np.array(total_pred))
loss_val /= n_batch
if is_train:
print('epoch %d, learning_rate %.5f \n\t train_loss %.4f, acc_train %.3f, F1_train %.3f, ' %
(epoch, trainer.learning_rate, loss_val, acc, F1))
        # decay lr
if epoch % 3 == 0:
trainer.set_learning_rate(trainer.learning_rate * 0.9)
else:
print('\t valid_loss %.4f, acc_valid %.3f, F1_valid %.3f, ' % (loss_val, acc, F1))
def train_valid(data_iter_train, data_iter_valid, model, loss, trainer, ctx, nepochs,
clip=None, class_weight=None, loss_name='sce'):
for epoch in range(1, nepochs+1):
start = time.time()
# train
is_train = True
one_epoch(data_iter_train, model, loss, trainer, ctx, is_train,
epoch, clip, class_weight, loss_name)
# valid
is_train = False
one_epoch(data_iter_valid, model, loss, trainer, ctx, is_train,
epoch, clip, class_weight, loss_name)
end = time.time()
print('time %.2f sec' % (end-start))
print("*"*100)
from util import get_weight
weight_list = get_weight(DATA_FOLDER, LABEL_FILE)
class_weight = None
loss_name = 'sce'
optim = 'adam'
lr, wd = .001, .999
clip = None
nepochs = 5
trainer = gluon.Trainer(model.collect_params(), optim, {'learning_rate': lr})
if loss_name == 'sce':
loss = gluon.loss.SoftmaxCrossEntropyLoss()
elif loss_name == 'wsce':
loss = WeightedSoftmaxCE()
# the value of class_weight is obtained by counting data in advance. It can be seen as a hyperparameter.
class_weight = nd.array(weight_list, ctx=ctx)
# train and valid
print(ctx)
train_valid(train_dataloader, valid_dataloader, model, loss, \
trainer, ctx, nepochs, clip=clip, class_weight=class_weight, \
loss_name=loss_name)
model.save_parameters("model/textcnn.params")
kernel_sizes, nums_channels = [2, 3, 4, 5], [100, 100, 100, 100]
model = TextCNN(vocab_len, emsize, kernel_sizes, nums_channels, 0, nclass)
model.load_parameters('model/textcnn.params', ctx=ctx)
TEST_DATA = 'test.csv'
predictions = []
test_df = pd.read_csv(DATA_FOLDER+TEST_DATA, header=None, sep='\t')
len(test_df)
start = time.time()
for _, tweet in test_df.iterrows():
token = vocab[jieba.lcut(tweet[1])]
if len(token)<5:
token += [0.]*(5-len(token))
inp = nd.array(token, ctx=ctx).reshape(1,-1)
pred = model(inp)
pred = nd.argmax(pred, axis=1).asscalar()
predictions.append(int(pred))
if len(predictions)%2000==0:
ckpt = time.time()
print('current pred len %d, time %.2fs' % (len(predictions), ckpt-start))
start = ckpt
submit = pd.DataFrame({'Expected': predictions})
submit.to_csv('submission.csv', sep=',', index_label='ID')
```
TSG088 - Hadoop datanode logs
=============================
Steps
-----
### Parameters
```
import re
tail_lines = 500
pod = None # All
container = "hadoop"
log_files = [ "/var/log/supervisor/log/datanode*.log" ]
expressions_to_analyze = [
re.compile(".{23} WARN "),
re.compile(".{23} ERROR ")
]
log_analyzer_rules = []
```
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
# Install the Kubernetes module
import sys
!{sys.executable} -m pip install kubernetes
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get the Hadoop datanode logs from the hadoop container
### Get tail for log
```
# Display the last 'tail_lines' of files in 'log_files' list
pods = api.list_namespaced_pod(namespace)
entries_for_analysis = []
for p in pods.items:
if pod is None or p.metadata.name == pod:
for c in p.spec.containers:
if container is None or c.name == container:
for log_file in log_files:
print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'")
try:
output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True)
except Exception:
print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}")
else:
for line in output.split('\n'):
for expression in expressions_to_analyze:
if expression.match(line):
entries_for_analysis.append(line)
print(line)
print("")
print(f"{len(entries_for_analysis)} log entries found for further analysis.")
```
### Analyze log entries and suggest relevant Troubleshooting Guides
```
# Analyze log entries and suggest further relevant troubleshooting guides
from IPython.display import Markdown
print(f"Applying the following {len(log_analyzer_rules)} rules to {len(entries_for_analysis)} log entries for analysis, looking for HINTs to further troubleshooting.")
print(log_analyzer_rules)
hints = 0
if len(log_analyzer_rules) > 0:
for entry in entries_for_analysis:
for rule in log_analyzer_rules:
if entry.find(rule[0]) != -1:
print (entry)
display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))
hints = hints + 1
print("")
print(f"{len(entries_for_analysis)} log entries analyzed (using {len(log_analyzer_rules)} rules). {hints} further troubleshooting hints made inline.")
print("Notebook execution is complete.")
```
## Project 2: Exploring Uganda's milk imports and exports
A country's economy depends, sometimes heavily, on its exports and imports. The United Nations Comtrade database provides data on global trade. It will be used to analyse Uganda's imports and exports of milk in 2015:
* How much does Uganda export and import, and is the balance positive (more exports than imports)?
* Which are the main trading partners, i.e. from/to which countries does Uganda import/export the most?
* Which are the regular customers, i.e. which countries buy milk from Uganda every month?
* Which countries does Uganda both import from and export to?
```
import warnings
warnings.simplefilter('ignore', FutureWarning)
from pandas import *
%matplotlib inline
```
## Getting and preparing the data
The data is obtained from the [United Nations Comtrade](http://comtrade.un.org/data/) website, by selecting the following configuration:
- Type of Product: goods
- Frequency: monthly
- Periods: Jan - Dec 2015
- Reporter: Uganda
- Partners: all
- Flows: imports and exports
- HS (as reported) commodity codes: 401 (Milk and cream, neither concentrated nor sweetened) and 402 (Milk and cream, concentrated or sweetened)
```
LOCATION = 'comrade_milk_ug_jan_dec_2015.csv'
```
On reading in the data, the commodity code has to be read as a string, to not lose the leading zero.
```
import pandas as pd
milk = pd.read_csv(LOCATION, dtype={'Commodity Code':str})
milk.tail(2)
```
The data covers 2015. Most columns are irrelevant for this analysis, or always contain the same value, like the year and reporter columns. The commodity code is transformed into a short but descriptive text and only the relevant columns are selected.
```
def milkType(code):
if code == '401': # neither concentrated nor sweetened
return 'unprocessed'
if code == '402': # concentrated or sweetened
return 'processed'
return 'unknown'
COMMODITY = 'Milk and cream'
milk[COMMODITY] = milk['Commodity Code'].apply(milkType)
MONTH = 'Period'
PARTNER = 'Partner'
FLOW = 'Trade Flow'
VALUE = 'Trade Value (US$)'
headings = [MONTH, PARTNER, FLOW, COMMODITY, VALUE]
milk = milk[headings]
milk.head()
```
The data contains the total imports and exports per month, under the 'World' partner. Those rows are removed to keep only the per-country data.
```
milk = milk[milk[PARTNER] != 'World']
milk.head()
milk.tail()
```
## Total trade flow
To answer the first question, 'how much does Uganda export and import and is the balance positive (more exports than imports)?', the dataframe is split into two groups: exports from Uganda and imports into Uganda. The trade values within each group are summed up to get the total trading.
```
grouped = milk.groupby([FLOW])
grouped[VALUE].aggregate(sum)
```
This shows a trade surplus of over 30 million dollars.
## Main trade partners
To address the second question, 'Which are the main trading partners, i.e. from/to which countries does Uganda import/export the most?', the dataframe is split by country instead, and then each group is aggregated for the total trade value. This is done separately for imports and exports. The result is sorted in descending order so that the main partners are at the top.
```
imports = milk[milk[FLOW] == 'Imports']
grouped = imports.groupby([PARTNER])
print('Uganda imports from', len(grouped), 'countries.')
print('The 5 biggest exporters to Uganda are:')
totalImports = grouped[VALUE].aggregate(sum).sort_values(inplace=False,ascending=False)
totalImports.head()
```
The import values can be plotted as a bar chart, making differences between countries easier to see.
```
totalImports.head(10).plot(kind='barh')
```
From the chart we can see that, among these top partners, Switzerland is the smallest source of milk imports into Uganda by trade value.
```
exports = milk[milk[FLOW] == 'Exports']
grouped = exports.groupby([PARTNER])
print('Uganda exports to', len(grouped), 'countries.')
print('The 5 biggest importers from Uganda are:')
grouped[VALUE].aggregate(sum).sort_values(ascending=False,inplace=False).head()
```
## Regular importers
Given that there are two commodities, the third question, 'Which are the regular customers, i.e. which countries buy milk from Uganda every month?', is meant in the sense that a regular customer imports both commodities every month. This means that if the exports dataframe is grouped by country, a regular customer's group has one row per commodity per month (20 rows in this dataset, which is what the filter below checks). To see the countries, only the first month of one commodity has to be listed, as by definition it's the same countries every month and for the other commodity.
```
def buysEveryMonth(group):
reply = len(group) == 20
return reply
grouped = exports.groupby([PARTNER])
regular = grouped.filter(buysEveryMonth)
print(regular)
regular[(regular[MONTH] == 201501) & (regular[COMMODITY] == 'processed')]
```
Just over 5% of Uganda's total exports are due to these regular customers.
```
regular[VALUE].sum() / exports[VALUE].sum()
```
## Bi-directional trade
To address the fourth question,
'Which countries does Uganda both import from and export to?', a pivot table is used to list the total export and import value for each country.
```
countries = pivot_table(milk, index=[PARTNER], columns=[FLOW],
values=VALUE, aggfunc=sum)
countries.head()
```
Removing the rows with a missing value will leave only those countries that have a bi-directional trade flow with Uganda.
```
countries.dropna()
```
## Conclusions
The milk and cream trade of Uganda from January to December 2015 was analysed in terms of which countries Uganda mostly depends on for income (exports) and goods (imports). Over the period, Uganda had a trade surplus of over 1 million US dollars.
Kenya is the main partner, but Uganda exported to Kenya almost triple the value that it imported from Kenya.
Uganda exported to over 100 countries during the period, but only imported from 24 countries, the main ones (top five by trade value) being not so geographically close (Kenya, Netherlands, United Arab Emirates, Oman, and South Africa). Of these main import partners, only Kenya is also a main export partner.
Uganda is heavily dependent on its regular customers, the 10 countries that buy all types of milk and cream every month. They contribute three quarters of the total export value.
For some partners, however, the trade value (in US dollars) is suspiciously low, which raises questions about the data's accuracy.
```
#################
# Preprocessing #
#################
# Scores by other composers from the Bach family have been removed beforehand.
# Miscellaneous scores like mass pieces have also been removed; the assumption here is that
# since different interpretations of the same piece (e.g. Ave Maria, etc) exist, including
# these pieces might hurt the prediction accuracy, here mostly based on chord progression.
# (more exactly, a reduced version of the chord progression.)
# In shell, find and copy midi files to target data directory and convert to mxl:
'''
cd {TARGETDIR}
find {MIDIFILEDIR} \( -name "bach*.mid" -o -name "beethoven*.mid" -o -name "scarlatti*.mid" \) -type f -exec cp {} . \;
find . -type f -name "*.mid" -exec /Applications/MuseScore\ 2.app/Contents/MacOS/mscore {} --export-to {}.mxl \;
for f in *.mxl; do mv "$f" "${f%.mid.mxl}.mxl"; done
ls *.mxl > mxl_list.txt
'''
from music21 import *
from os import listdir
from os.path import isfile, getsize
mxldir = './'  # assumed location of the .mxl files listed in mxl_list.txt
# timeout function that lets move on beyond too big files.
# by Thomas Ahle: http://stackoverflow.com/a/22348885
import signal
class timeout:
def __init__(self, seconds=1, error_message='Timeout'):
self.seconds = seconds
self.error_message = error_message
def handle_timeout(self, signum, frame):
raise TimeoutError(self.error_message)
def __enter__(self):
signal.signal(signal.SIGALRM, self.handle_timeout)
signal.alarm(self.seconds)
def __exit__(self, type, value, traceback):
signal.alarm(0)
def parse(mxllist, composer):
composer_list = [f for f in mxllist if f.replace('-', '_').split('_')[0] == composer]
for file in composer_list:
if (getsize(file)>10000): # remove too short scores that may contain no notes
with timeout(seconds=6000):
try:
s = converter.parse(mxldir+file)
try:
k = s.flat.keySignature.sharps
except AttributeError:
k = s.analyze('key').sharps
except:
with open('{}-parsed.txt'.format(composer), 'a') as output_file:
output_file.write('key could not by analyzed\n')
with open('{}-transposed.txt'.format(composer), 'a') as output_file:
output_file.write('key could not by analyzed\n')
continue
t = s.transpose((k*5)%12)
except:
with open('{}-parsed.txt'.format(composer), 'a') as output_file:
output_file.write('timeout\n')
with open('{}-transposed.txt'.format(composer), 'a') as output_file:
output_file.write('timeout\n')
continue
fp_s = converter.freeze(s, fmt='pickle')
fp_t = converter.freeze(t, fmt='pickle')
with open('{}-parsed.txt'.format(composer), 'a') as output_file:
output_file.write(fp_s+'\n')
with open('{}-transposed.txt'.format(composer), 'a') as output_file:
output_file.write(fp_t+'\n')
with open('mxl_list.txt', 'r') as f:
mxllist = [line.strip() for line in f.readlines()]
parse(mxllist, 'bach')
parse(mxllist, 'beethoven')
parse(mxllist, 'debussy')
parse(mxllist, 'scarlatti')
parse(mxllist, 'victoria')
######################
# Feature Extraction #
######################
import itertools
from collections import Counter
flatten = lambda l: [item for sublist in l for item in sublist] # by Alex Martinelli & Guillaume Jacquenot: http://stackoverflow.com/a/952952
uniqify = lambda seq: list(set(seq))
# Define known chords
major, minor, suspended, augmented = [0, 4, 7], [0, 3, 7], [0, 5, 7], [0, 4, 8]
diminished, major_sixth, minor_sixth = [0, 3, 6], [0, 4, 7, 9], [0, 3, 7, 9]
dominant_seventh, major_seventh, minor_seventh = [0, 4, 7, 10], [0, 4, 7, 11], [0, 3, 7, 10]
half_diminished_seventh, diminished_seventh = [0, 3, 6, 10], [0, 3, 6, 9]
major_ninth, dominant_ninth = [0, 2, 4, 7, 11], [0, 2, 4, 7, 10]
dominant_minor_ninth, minor_ninth = [0, 1, 4, 7, 10], [0, 2, 3, 7, 10]
chord_types_list = [major, minor, suspended, augmented, diminished,
                    major_sixth, minor_sixth, dominant_seventh, major_seventh, minor_seventh,
                    half_diminished_seventh, diminished_seventh,
                    major_ninth, dominant_ninth, dominant_minor_ninth, minor_ninth]
chord_types_string = ['major', 'minor', 'suspended', 'augmented', 'diminished',
                      'major_sixth', 'minor_sixth', 'dominant_seventh', 'major_seventh', 'minor_seventh',
                      'half_diminished_seventh', 'diminished_seventh',
                      'major_ninth', 'dominant_ninth', 'dominant_minor_ninth', 'minor_ninth']
roots = list(range(12))
chord_orders = flatten([[{(n+r)%12 for n in v} for v in chord_types_list] for r in roots])
unique_orders = []
for i in range(192):
if chord_orders[i] not in unique_orders:
unique_orders.append(chord_orders[i])
def merge_chords(s):
sf = s.flat
chords_by_offset = []
for i in range(int(sf.highestTime)):
chords_by_offset.append(chord.Chord(sf.getElementsByOffset(i,i+1, includeEndBoundary=False, mustFinishInSpan=False, mustBeginInSpan=False).notes))
return chords_by_offset
def find_neighbor_note(n, k):
# find notes k steps away from n
return (roots[n-6:]+roots[:(n+6)%12])[6+k], (roots[n-6:]+roots[:(n+6)%12])[6-k]
def find_note_distance(n1, n2):
return abs(6 - (roots[n1-6:]+roots[:(n1+6)%12]).index(n2))
def find_chord_distance(set1, set2):
d1, d2 = set1.difference(set2), set2.difference(set1)
if len(d1) < len(d2):
longer, shorter = d2, list(d1)
else:
longer, shorter = d1, list(d2)
distances = []
for combination in itertools.combinations(longer, len(shorter)):
for permutation in itertools.permutations(combination):
dist_p = abs(len(d1)-len(d2))*3 # length difference means notes need to be added/deleted. weighted by 3
for i in range(len(shorter)):
dist_p += find_note_distance(shorter[i], permutation[i])
distances.append(dist_p)
return min(distances)
CACHE = dict()
def find_closest_chord(c, cache=CACHE):
if len(c) == 0:
return -1 # use -1 for rest (chords are 0 to 191)
# retrieve from existing knowledge
o_str, o, p = str(c.normalOrder), set(c.normalOrder), c.pitchClasses
if o in chord_orders:
return chord_orders.index(o)
# the above root sometimes differs from c.findRoot(), which might be more reliable.
# however, the errors are rare and it should be good enough for now.
if o_str in cache.keys():
return cache[o_str]
# find closest chord from scratch
chord_distances = dict()
most_common_note = Counter(c.pitchClasses).most_common(1)[0][0]
for i in range(192):
d = find_chord_distance(o, chord_orders[i])
# prioritize found chord's root note if most common note of the chord.
if int(i/16) == most_common_note:
d += -1
if chord_distances.get(d) == None:
chord_distances[d] = []
chord_distances[d].append(i)
# if multiple chords are tied, use first one (could be better)
closest_chord = chord_distances[min(chord_distances.keys())][0]
cache[o_str] = closest_chord
return closest_chord
def extract_features(parsed_list, idx):
s = converter.thaw(parsed_list[idx])
chords_by_offset = merge_chords(s)
chord_sequence = []
for i in range(len(chords_by_offset)):
chord_sequence.append(find_closest_chord(chords_by_offset[i], CACHE))
return chord_sequence
with open('bach-parsed.txt', 'r') as f:
FILES_BACH = [line.strip() for line in f.readlines()]
with open('beethoven-parsed.txt', 'r') as f:
FILES_BEETHOVEN = [line.strip() for line in f.readlines()]
with open('debussy-parsed.txt', 'r') as f:
FILES_DEBUSSY = [line.strip() for line in f.readlines()]
with open('scarlatti-parsed.txt', 'r') as f:
FILES_SCARLATTI = [line.strip() for line in f.readlines()]
with open('victoria-parsed.txt', 'r') as f:
FILES_VICTORIA = [line.strip() for line in f.readlines()]
for i in range(len(FILES_BACH)):
with open('bach-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_BACH, i))+'\n')
for i in range(len(FILES_BEETHOVEN)):
with open('beethoven-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_BEETHOVEN, i))+'\n')
for i in range(len(FILES_DEBUSSY)):
with open('debussy-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_DEBUSSY, i))+'\n')
for i in range(len(FILES_SCARLATTI)):
with open('scarlatti-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_SCARLATTI, i))+'\n')
for i in range(len(FILES_VICTORIA)):
with open('victoria-chordsequence.txt', 'a') as f:
f.write(str(extract_features(FILES_VICTORIA, i))+'\n')
# Additional feature set: extract durations of notes, chords, and rests
def find_length_add_to_list(cnr, out_list):
try:
out_list.append(cnr.duration.fullName)
except:
out_list.append(str(cnr.duration.quarterLength))
def extract_cnr_duration(piece):
s = converter.thaw(piece).flat
chords, notes, rests = [], [], []
for c in s.getElementsByClass(chord.Chord):
find_length_add_to_list(c, chords)
for n in s.getElementsByClass(note.Note):
find_length_add_to_list(n, notes)
for r in s.getElementsByClass(note.Rest):
find_length_add_to_list(r, rests)
elements = ['chord|'+d for d in chords] + ['note|'+d for d in notes] + ['rest|'+d for d in rests]
return ';'.join(elements)
for piece in FILES_BACH:
with open('bach-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
for piece in FILES_BEETHOVEN:
with open('beethoven-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
for piece in FILES_DEBUSSY:
with open('debussy-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
for piece in FILES_SCARLATTI:
with open('scarlatti-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
for piece in FILES_VICTORIA:
with open('victoria-durations.txt', 'a') as f:
f.write(extract_cnr_duration(piece)+'\n')
```
# Multilayer Perceptron
Some say that 9 out of 10 people who use neural networks apply a Multilayer Perceptron (MLP). An MLP is basically a feed-forward network with (at least) 3 layers: an input layer, an output layer, and a hidden layer in between. Thus, the MLP has no structural loops: information always flows from left (input) to right (output). The lack of inherent feedback saves a lot of headaches. Its analysis is totally straightforward given that the output of the network is always a function of the input; it does not depend on any former state of the model or on previous inputs.

Regarding the topology of an MLP, it is normally assumed to be a densely-meshed one-to-many link model between the layers. This is mathematically represented by two matrices of parameters named “the thetas”. In any case, if a certain connection is of little relevance with respect to the observable training data, the network will automatically pay little attention to its contribution and assign it a low weight close to zero.
## Prediction
The evaluation of the output of a MLP, i.e., its prediction, given an input vector of data is a matter of matrix multiplication. To that end, the following variables are described for convenience:
* $N$ is the dimension of the input layer.
* $H$ is the dimension of the hidden layer.
* $K$ is the dimension of the output layer.
* $M$ is the dimension of the corpus (number of examples).
Given the variables above, the parameters of the network, i.e., the thetas matrices, are defined as follows:
* $\theta^{(IN)} \rightarrow H \times (N+1)$
* $\theta^{(OUT)} \rightarrow K \times (H+1)$
```
import NeuralNetwork
# 2 input neurons, 3 hidden neurons, 1 output neuron
nn = NeuralNetwork.MLP([2,3,1])
# nn[0] -> ThetaIN, nn[1] -> ThetaOUT
print(nn)
```
What follows are the ordered steps needed to evaluate the network prediction.
### Input Feature Expansion
The first step to attain a successful operation of the neural network is to add a bias term to the input feature space (mapped to the input layer):
$$a^{(IN)} = [1;\ x]$$
The feature expansion of the input space with the bias term increases the learning effectiveness of the model because it adds a degree of freedom to the adaptation process. Note that $a^{(IN)}$ directly represents the activation values of the input layer. Thus, the input layer is linear with the input vector $x$ (it is defined by a linear activation function).
### Transit to the Hidden Layer
Once the activations (outputs) of the input layer are determined, their values flow into the hidden layer through the weights defined in $\theta^{(IN)}$:
$$z^{(HID)} = \theta^{(IN)}\;a^{(IN)}$$
Similarly, the dimensionality of the hidden layer is expanded with a bias term to increase its learning effectiveness:
$$a^{(HID)} = [1;\ g(z^{(HID)})]$$
Here, a new function $g()$ is introduced. This is the generic activation function of a neuron, and generally it is non-linear. Its application yields the output values of the hidden layer $a^{(HID)}$ and provides the true learning power to the neural model.
### Output
Then, the activation values of the output layer, i.e., the network prediction, are calculated as follows:
$$z^{(OUT)} = \theta^{(OUT)}\;a^{(HID)}$$
and finally
$$a^{(OUT)} = g(z^{(OUT)}) = y$$
### Activation Function
The activation function of the neuron is (usually) a non-linear function that provides the expressive power to the neural network. It is recommended that this function be smooth, differentiable and monotonically non-decreasing (for learning purposes). Typically, the logistic sigmoid function is used.
$$g(z) = \frac{1}{(1 + \exp^{-z})}$$
Note that the range of this function varies from 0 to 1. Therefore, the output values of the neurons will always be bounded by the upper and the lower limits of this range. This entails considering a scaling process if a broader range of predicted values is needed. Other activation functions can be used with the "af" parameter. For example, the range of the hyperbolic tangent ("HyperTan" function) goes from -1 to 1.
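To make the matrix algebra above concrete, here is a minimal plain-NumPy sketch of the prediction steps (feature expansion, transit to the hidden layer, and output) assuming a logistic activation. The names `logistic` and `mlp_forward` are made up for illustration; this is not the internal implementation of the `NeuralNetwork` module used below.
```
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(theta_in, theta_out, x):
    a_in = np.concatenate(([1.0], x))                   # a^(IN) = [1; x]
    z_hid = theta_in @ a_in                             # z^(HID) = theta^(IN) a^(IN)
    a_hid = np.concatenate(([1.0], logistic(z_hid)))    # a^(HID) = [1; g(z^(HID))]
    z_out = theta_out @ a_hid                           # z^(OUT) = theta^(OUT) a^(HID)
    return logistic(z_out)                              # a^(OUT) = g(z^(OUT)) = y

# toy example: N=2 inputs, H=3 hidden units, K=1 output
rng = np.random.default_rng(0)
theta_in = rng.uniform(-0.5, 0.5, size=(3, 3))          # H x (N+1)
theta_out = rng.uniform(-0.5, 0.5, size=(1, 4))         # K x (H+1)
print(mlp_forward(theta_in, theta_out, np.array([1.0, 2.0])))
```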
```
import numpy as np
# Random instance with 2 values
x = np.array([1.0, 2.0])
y = NeuralNetwork.MLP_Predict(nn, x)
# intermediate results are available
# y[0] -> input result, y[1] -> hidden result, y[2] -> output result
print(y)
z = np.arange(-8, 8, 0.1)
g = NeuralNetwork.Logistic(z)
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure()
plt.plot(z, g, 'b-', label='g(z)')
plt.legend(loc='upper left')
plt.xlabel('Input [z]')
plt.ylabel('Output [g]')
plt.title('Logistic sigmoid activation function')
plt.show()
```
## Training
Training a neural network essentially means fitting its parameters to a set of example data considering an objective function, aka cost function. This process is also known as supervised learning. It is usually implemented as an iterative procedure.
### Cost Function
The cost function somehow encodes the objective or goal that should be attained with the network. It is usually defined as a classification or a regression evaluation function. However, the actual form of the cost function is effectively the same, which is an error or fitting function. A cost function measures the discrepancy between the desired output for a pattern and the output produced by the network.
The cost function $J$ quantifies the amount of squared error (or misfitting) that the network displays with respect to a set of data. Thus, in order to achieve a successfully working model, this cost function must be minimised with an adequate set of parameter values. To do so, several solutions are valid as long as this cost function is a convex function (i.e., has a bowl-like shape). A well-known example of such a function is the quadratic one, which trains the neural network considering a minimum squared error criterion over the whole dataset of training examples:
$$J(\theta, x) = \frac{1}{M} \sum_{m=1}^M \sum_{k=1}^K \left(Error_k^{(m)}\right)^2 = \frac{1}{M} \sum_{m=1}^M \sum_{k=1}^K \left(t_k^{(m)}-y_k^{(m)}\right)^2$$
Note that the term $t$ in the cost function represents the target value of the network (i.e., the ideal/desired network output) for a given input data value $x$. Now that the cost function can be expressed, a convex optimisation procedure (e.g., a gradient-based method) must be conducted in order to minimise its value. Note that this is essentially a least-squares regression.
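As a rough illustration of this criterion (not the module's `MLP_Cost` implementation), the quadratic cost can be computed by averaging the squared output errors over the examples, reusing the `mlp_forward` sketch from the Prediction section; here `X` holds one example per row and `T` the corresponding target rows:
```
import numpy as np

def quadratic_cost(theta_in, theta_out, X, T):
    # J = (1/M) * sum_m sum_k (t_k - y_k)^2
    preds = np.array([mlp_forward(theta_in, theta_out, x) for x in X])
    return np.mean(np.sum((T - preds) ** 2, axis=1))
```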
### Regularisation
The mean squared-error cost function described above does not incorporate any knowledge or constraint about the characteristics of the parameters being adjusted through the optimisation training strategy. This may develop into a generalisation problem because the space of solutions is large and some of these solutions may turn the model unstable with new unseen data. Therefore, there is the need to smooth the performance of the model over a wide range of input data.
Neural networks usually generalise well as long as the weights are kept small. Thus, the Tikhonov regularisation function, aka ridge regression, is introduced as a means to control complexity of the model in favour of its increased general performance. This regularisation approach, which is used in conjunction with the aforementioned cost function, favours small weight values (it is a cost over large weight values):
$$R(\theta) = \frac{\lambda}{2 M} \sum_{\forall \theta \notin bias} \theta^2$$
There is a typical trade-off in Machine Learning, known as the bias-variance trade-off, which has a direct relationship with the complexity of the model, the nature of the data and the amount of available training data to adjust it. This ability of the model to learn more or less complex scenarios raises an issue with respect to its fitting (memorisation v. generalisation): if the data is simple to explain, a complex model is said to overfit the data, causing its overall performance to drop (high variance model). Similarly, if complex data is tackled with a simple model, such model is said to underfit the data, also causing its overall performance to drop (high bias model). As it is usual in engineering, a compromise must be reached with an adequate $\lambda$ value.
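A short sketch of the ridge term alone, under the convention used here that column 0 of each theta holds the bias weights (which are excluded from the penalty); `lam` plays the role of $\lambda$ and `M` is the number of examples. The regularised objective is then simply the sum of this penalty and the quadratic cost sketched earlier.
```
import numpy as np

def ridge_penalty(theta_in, theta_out, lam, M):
    # R = lambda / (2M) * sum of squared non-bias weights
    squared = np.sum(theta_in[:, 1:] ** 2) + np.sum(theta_out[:, 1:] ** 2)
    return lam / (2.0 * M) * squared
```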
### Parameter Initialisation
The initial weights of the thetas assigned by the training process are critical with respect to the success of the learning strategy. They determine the starting point of the optimisation procedure, and depending on their value, the adjusted parameter values may end up in different places if the cost function has multiple (local) minima.
The parameter initialisation process is based on a uniform distribution between two small numbers that take into account the amount of input and output units of the adjacent layers:
$$\theta_{init} = U[-\sigma, +\sigma]\ \ where\ \ \sigma = \frac{\sqrt{6}}{\sqrt{in + out}}$$
In order to ensure a proper learning procedure, the weights of the parameters need to be randomly assigned in order to prevent any symmetry in the topology of the network model (that would be likely to end in convergence problems).
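For illustration, the initialisation rule above can be sketched as follows for a layer with `n_in` incoming and `n_out` outgoing units (plus one bias column); the helper name is hypothetical and not part of the `NeuralNetwork` module:
```
import numpy as np

def init_theta(n_in, n_out, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    sigma = np.sqrt(6.0) / np.sqrt(n_in + n_out)   # sigma = sqrt(6) / sqrt(in + out)
    return rng.uniform(-sigma, sigma, size=(n_out, n_in + 1))
```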
### Gradient Descent
Given the convex shape of the cost function (which usually also includes the regularisation), the minimisation objective boils down to finding the extremum of this function using its derivative in the continuous space of the weights. To this end you may use the analytic form of the derivative of the cost function (a nightmare), a numerical finite difference, or automatic differentiation.
Gradient descent is a first-order optimisation algorithm, complete but non-optimal. It first starts with some arbitrarily chosen parameters and computes the derivative of the cost function with respect to each of them $\frac{\partial J(\theta,x)}{\partial \theta}$. The model parameters are then updated by moving them some distance (determined by the so called learning rate $\eta$) from the former initial point in the direction of the steepest descent, i.e., along the negative of the gradient. If $\eta$ is set too small, though, convergence is needlessly slow, whereas if it is too large, the update correction process may overshoot and even diverge.
$$\theta^{t+1} \leftarrow \theta^t - \eta \frac{\partial^t J(\theta,x)}{\partial \theta} $$
These steps are iterated in a loop until some stopping criterion is met, e.g., a determined number of epochs (i.e., the processing of all patterns in the training example set) is reached, or when no significant improvement is observed.
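The update rule can be sketched with a numerical (central finite-difference) gradient, one of the options mentioned above; `cost` stands for any function of a parameter array and `eta` for the learning rate, and the function names are illustrative only:
```
import numpy as np

def numerical_gradient(cost, theta, eps=1e-4):
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        bump = np.zeros_like(theta)
        bump.flat[i] = eps
        grad.flat[i] = (cost(theta + bump) - cost(theta - bump)) / (2.0 * eps)
    return grad

def gradient_descent_step(cost, theta, eta):
    # theta^(t+1) <- theta^(t) - eta * dJ/dtheta
    return theta - eta * numerical_gradient(cost, theta)
```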
#### Stochastic versus Batch Learning
One last remark should be made about the amount of examples $M$ used in the cost function for learning. If the training procedure considers several instances at once per cost gradient computation and parameter update, i.e., $M \gg 1$, the approach is called batch learning. Batch learning is usually slow because each cost computation accounts for all the available training instances, and especially if the data redundancy is high (similar patterns). However, the conditions of convergence are well understood.
Alternatively, it is usual to consider only one single training instance at a time, i.e., $M=1$, to estimate the gradient in order to speed up the iterative learning process. This procedure is called stochastic (online) learning. Online learning steps are faster to compute, but this noisy single-instance approximation of the cost gradient function makes it a little inaccurate around the optimum. However, stochastic learning often results in better solutions because of the noise in the updates, and thus it is very convenient in most cases.
```
# Load Iris dataset
from sklearn import datasets as dset
import copy
iris = dset.load_iris()
# build network with 4 input, 1 output
nn = NeuralNetwork.MLP([4,4,1])
# keep original for further experiments
orig = copy.deepcopy(nn)
# Target needs to be divided by 2 because of the sigmoid, values 0, 0.5, 1
idat, itar = iris.data, iris.target/2.0
# regularisation parameter of 0.2
tcost = NeuralNetwork.MLP_Cost(nn, idat, itar, 0.2)
# Cost value for an untrained network
print("J(ini) = " + str(tcost))
# Train with numerical gradient, 20 rounds, batch
# learning rate is 0.1
NeuralNetwork.MLP_NumGradDesc(nn, idat, itar, 0.2, 20, 0.1)
```
### Backpropagation
The backpropagation algorithm estimates the error for each neuron unit so as to effectively deploy the gradient descent optimisation procedure. It is a popular algorithm, conceptually simple, computationally efficient, and it often works. In order to conduct the estimation of the neuron-wise errors, it first propagates the training data through the network, then it computes the error with the predictions and the target values, and afterwards it backpropagates the error from the output to the input, generally speaking, from a given layer $(n)$ to the immediately former one $(n-1)$:
$$Error^{(n-1)} = Error^{(n)} \; \theta^{(n)}$$
Note that the bias neurons don't backpropagate; they are not connected to the former layer.
Finally, the gradient is computed so that the weights may be updated. Each weight links an input unit $I$ to an output unit $O$, which also provides the error feedback. The general formula that is derived for a logistic sigmoid activation function is shown as follows:
$$\theta^{(t+1)} \leftarrow \theta^{(t)} + \eta \; I \; Error \; O \; (1 - O)$$
From a computational complexity perspective, Backpropagation is much more effective than the numerical gradient applied above because it computes the errors for all the weights in 2 network traversals, whereas numerical gradient needs to compute 2 traversals per parameter. In addition, stochastic learning is generally the preferred method for Backprop.
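The quoted update rule can be sketched for a single training instance and one hidden layer as follows (reusing `logistic` from the earlier sketch). This mirrors the formulas above for illustration only; it is not the `NeuralNetwork.MLP_Backprop` implementation.
```
import numpy as np

def backprop_step(theta_in, theta_out, x, t, eta):
    # forward pass, keeping the intermediate activations
    a_in = np.concatenate(([1.0], x))
    a_hid = np.concatenate(([1.0], logistic(theta_in @ a_in)))
    a_out = logistic(theta_out @ a_hid)
    # output error, backpropagated to the hidden layer (bias unit excluded)
    err_out = t - a_out
    err_hid = (theta_out.T @ err_out)[1:]
    # theta <- theta + eta * I * Error * O * (1 - O)
    theta_out += eta * np.outer(err_out * a_out * (1.0 - a_out), a_hid)
    theta_in += eta * np.outer(err_hid * a_hid[1:] * (1.0 - a_hid[1:]), a_in)
    return theta_in, theta_out
```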
```
# Iris example with Backprop
# load original network
nn = copy.deepcopy(orig)
# Cost value for an untrained network
tcost = NeuralNetwork.MLP_Cost(nn, idat, itar, 0.2)
print("J(ini) = " + str(tcost))
# Train with Backprop, 20 rounds
# learning rate is 0.1
NeuralNetwork.MLP_Backprop(nn, idat, itar, 0.2, 20, 0.1)
```
### Practical Techniques
Backpropagation learning can be tricky, particularly for multilayered networks where the cost surface is non-quadratic, non-convex, and high dimensional with many local minima and/or flat regions. Its successful convergence is not guaranteed. Designing and training an MLP using Backprop requires making choices such as the number and type of nodes, layers, learning rates, training and test sets, etc., and many undesirable behaviours can be avoided with practical techniques.
#### Instance Shuffling
In stochastic learning, neural networks learn the most from unexpected instances. Therefore, it is advisable to iterate over instances that are the most unfamiliar to the system (i.e., have the maximum information content). To improve the chances of learning effectively, it is recommended to shuffle the training set so that successive training instances rarely belong to the same class.
```
from sklearn.utils import shuffle
# load original network
nn = copy.deepcopy(orig)
# shuffle instances
idat, itar = shuffle(idat, itar)
NeuralNetwork.MLP_Backprop(nn, idat, itar, 0.2, 20, 0.1)
```
#### Feature Standardisation
Convergence is usually faster if the average of each input feature over the training set is close to zero, otherwise the updates will be biased in a particular direction and thus will slow learning.
Additionally, scaling the features so that all have about the same covariance speeds learning because it helps to balance out the rate at which the weights connected to the input nodes learn.
```
# feature stats
mu_idat = np.mean(idat, axis=0)
std_idat = np.std(idat, axis=0)
# standardise
s_idat = (idat - mu_idat) / std_idat
# eval
test = copy.deepcopy(orig)
NeuralNetwork.MLP_Backprop(test, s_idat, itar, 0.2, 20, 0.1)
```
#### Feature Decorrelation
If inputs are uncorrelated then it is possible to solve for the weight values independently. With correlated inputs, the solution must be searched simultaneously, which is a much harder problem. Principal Component Analysis (aka the Karhunen-Loeve expansion) can be used to remove linear correlations in inputs.
```
# construct orthogonal basis with principal vectors
covmat = np.cov(s_idat.T)
l,v = np.linalg.eig(covmat)
# reproject
d_s_idat = s_idat.dot(v)
# eval
test = copy.deepcopy(orig)
NeuralNetwork.MLP_Backprop(test, d_s_idat, itar, 0.2, 20, 0.1)
```
#### Target Values
Target values at the sigmoid asymptotes need to be driven by large weights, which can result in instabilities. Instead, target values at the points of the extrema of the second derivative of the sigmoid activation function avoid saturating the output units. The second derivative of the logistic sigmoid is $g''(z) = g(z)(1 - g(z))(1 - 2g(z))$, shown below.
```
g = NeuralNetwork.Logistic
ddg = g(z)*(1 - g(z))*(1 - 2*g(z))
plt.figure()
plt.plot(z, ddg, 'b-', label='g\'\'(z)')
plt.legend(loc='upper left')
plt.xlabel('Input [z]')
plt.ylabel('Output [g\'\']')
plt.title('Second derivative of the logistic sigmoid activation function')
plt.show()
# max min target values
mx = max(ddg)
mi = min(ddg)
c = 0
for i in ddg:
if i == mx:
print("Max target " + str(z[c]) + " -> " + str(g(z[c])))
if i == mi:
print("Min target " + str(z[c]) + " -> " + str(g(z[c])))
c += 1
```
Therefore, optimum target values must be at 0.21 and 0.79.
```
for i in range(len(itar)):
if itar[i] == 0:
itar[i] = 0.21
if itar[i] == 1:
itar[i] = 0.79
test = copy.deepcopy(orig)
NeuralNetwork.MLP_Backprop(test, d_s_idat, itar, 0.2, 20, 0.1)
```
#### Target Vectors
When designing a learning system, it is suitable to take into account the nature of the problem at hand (e.g., whether if it is a classification problem or a regression problem) to determine the number of output units $K$.
In the case of classification, $K$ should be the amount of different classes, and the target output should be a binary vector. Given an instance, only the output unit that corresponds to the instance class should be set. This approach is usually referred to as "one-hot" encoding. The decision rule for classification is then driven by the maximum output unit.
In the case of a regression problem, $K$ should be equal to the number of dependent variables.
```
# Iris is a classification problem, K=3
# build network with 4 input, 3 outputs
test3 = NeuralNetwork.MLP([4,4,3])
# modify targets
t = []
for i in itar:
if i == 0.21:
t.append([0.79,0.21,0.21])
elif i == 0.5:
t.append([0.21,0.79,0.21])
else:
t.append([0.21,0.21,0.79])
t = np.array(t)
NeuralNetwork.MLP_Backprop(test3, d_s_idat, t, 0.2, 20, 0.1)
```
Finally, the effectiveness/performance of each approach should be scored with an appropriate metric: squared-error residuals like the cost function for regression problems, and competitive selection for classification.
```
# compare accuracies between single K and multiple K
single = 0
multiple = 0
for x,y in zip(d_s_idat, itar):
ps = NeuralNetwork.MLP_Predict(test, x)
ps = ps[-1][0]
pm = NeuralNetwork.MLP_Predict(test3, x)
pm = [pm[-1][0], pm[-1][1], pm[-1][2]]
if y == 0.21: # class 0
if np.abs(ps - 0.21) < np.abs(ps - 0.5):
if np.abs(ps - 0.21) < np.abs(ps - 0.79):
single += 1
if pm[0] > pm[1]:
if pm[0] > pm[2]:
multiple += 1
elif y == 0.5: # class 1
if np.abs(ps - 0.5) < np.abs(ps - 0.21):
if np.abs(ps - 0.5) < np.abs(ps - 0.79):
single += 1
if pm[1] > pm[0]:
if pm[1] > pm[2]:
multiple += 1
else: # class 2
if np.abs(ps - 0.79) < np.abs(ps - 0.21):
if np.abs(ps - 0.79) < np.abs(ps - 0.5):
single += 1
if pm[2] > pm[0]:
if pm[2] > pm[1]:
multiple += 1
print("Accuracy single: " + str(single))
print("Accuracy multiple: " + str(multiple))
```
#### Hidden Units
The number of hidden units determines the expressive power of the network, and thus, the complexity of its transfer function. The more complex a model is, the more complicated data structures it can learn. Nevertheless, this argument cannot be extended ad infinitum because a shortage of training data with respect to the amount of parameters to be learnt may lead the model to overfit the data. That’s why the aforementioned regularisation function is also used to avoid this situation.
Thus, it is common to have a skew toward suggesting a slightly more complex model than strictly necessary (regularisation will compensate for the extra complexity if necessary). Some heuristic guidelines to guess this optimum number of hidden units indicate an amount somewhat related to the number of input and output units. This is an experimental issue, though. There is no rule of thumb for this. Apply a configuration that works for your problem and you’re done.
#### Final Remarks
* Tweak the network: different activation function, adaptive learning rate, momentum, annealing, noise, etc.
* Focus on model generalisation: keep a separate self-validation set of data (not used to train the model) to test and estimate the actual performance of the model. See [test_iris.py](test_iris.py)
* Incorporate as much knowledge as possible. Expertise is a key indicator of success. Data driven models don’t do magic, the more information that is available, the greater the performance of the model.
* Feature Engineering is of utmost importance. This relates to the former point: the more useful information that can be extracted from the input data, the better performance can be expected. Salient indicators are keys to success. This may lead to selecting only the most informative features (mutual information, chi-square...), or to change the feature space that is used to represent the instance data (Principal Component Analysis for feature extraction and dimensionality reduction). And always standardise your data and exclude outliers.
* Get more data if the model is not good enough. Related to “the curse of dimensionality” principle: if good data is lacking, no successful model can be obtained. There must be a coherent relation between the parameters of the model (i.e., its complexity) and the amount of available data to train them.
* Ensemble models, integrate criteria. Bearing in mind that the optimum model structure is not known in advance, one of the most reasonable approaches to obtain a fairly good guess is to apply different models (with different learning features) to the same problem and combine/weight their outputs. Related techniques to this are also known as “boosting”.
```
from sklearn.datasets import load_iris # iris dataset
from sklearn import tree # for fitting model
# for the particular visualization used
from six import StringIO
import pydot
import os.path
# to display graphs
%matplotlib inline
import matplotlib.pyplot
# get dataset
iris = load_iris()
iris.keys()
import pandas
iris_df = pandas.DataFrame(iris.data)
iris_df.columns = iris.feature_names
iris_df['target'] = [iris.target_names[target] for target in iris.target]
iris_df.head()
iris_df.describe()
print(iris_df)
# choose two features to plot
x_feature = 0
y_feature = 3
#x = list(list(zip(*iris.data))[x_feature])
#y = list(list(zip(*iris.data))[y_feature])
x = iris.data[:, x_feature]
y = iris.data[:, y_feature]
# The data are in order by type (types of irises). Find out the border indexes of the types.
end_type_one = list(iris.target).index(1)
end_type_two = list(iris.target).index(2)
fig = matplotlib.pyplot.figure() # create graph
fig.suptitle('Two Features of the Iris Data Set') # set title
# set axis labels
matplotlib.pyplot.xlabel(iris.feature_names[x_feature])
matplotlib.pyplot.ylabel(iris.feature_names[y_feature])
# put the input data on the graph, with different colors and shapes for each type
scatter_0 = matplotlib.pyplot.scatter(x[:end_type_one], y[:end_type_one],
c="red", marker="o", label=iris.target_names[0])
scatter_1 = matplotlib.pyplot.scatter(x[end_type_one:end_type_two], y[end_type_one:end_type_two],
c="blue", marker="^", label=iris.target_names[1])
scatter_2 = matplotlib.pyplot.scatter(x[end_type_two:], y[end_type_two:],
c="green", marker="*", label=iris.target_names[2])
matplotlib.pyplot.legend(handles=[scatter_0, scatter_1, scatter_2]) # make legend
matplotlib.pyplot.show() # show the graph
print(iris.data)
print(x)
decision_tree = tree.DecisionTreeClassifier() # make model
decision_tree.fit(iris.data, iris.target) # fit model to data
# make pdf diagram of decision tree
dot_data = StringIO()
tree.export_graphviz(decision_tree, out_file=dot_data, feature_names=iris.feature_names, class_names=iris.target_names,
filled=True, rounded=True, special_characters=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())[0]
graph.write_pdf(os.path.expanduser("~/Desktop/introToML/ML/New Jupyter Notebooks/iris_decision_tree_regular.pdf"))
inputs = [iris.data[0], iris.data[end_type_one], iris.data[end_type_two]] # use the first input of each class
print('Class predictions: {0}'.format(list(iris.target_names[prediction] for prediction in decision_tree.predict(inputs)))) # print predictions
print('Probabilities:\n{0}'.format(decision_tree.predict_proba(inputs))) # print prediction probabilities
```
# Exercise Option #1 - Standard Difficulty
0. Submit the PDF you generated as a separate file in Canvas.
1. According to the PDF, a petal width <= 0.8 cm would tell you with high (100%) probability that you are looking at a setosa iris.
2. According to the PDF, you're supposed to look at the petal length, petal width, and sepal length to tell a virginica from a versicolor.
3. The array value at each node in the pdf shows how many data values of each class passed through the node.
4. The predictions always have a 100% probability because any data value you give will end up at one end node. Each end node has one class prediction.
5. Below I use a subset of the features (3/4). The new decision tree was completely different than the original: it had more nodes and a different overall shape. When looking at the original decision tree, most of the nodes separated data based on petal length or petal width. The one feature that the new tree does not use is petal width, which is the most likely cause for why the second tree had to use more nodes (it lacked a feature that would make it easy to distinguish the classes).
```
# Use 3/4 columns (the first, second, & third)
first_feature = 0
second_feature = 1
third_feature = 2
iris_inputs = iris.data[:,[first_feature, second_feature, third_feature]] # use only three columns of the data
decision_tree_with_portion = tree.DecisionTreeClassifier() # make model
decision_tree_with_portion.fit(iris_inputs, iris.target) # fit model to data
# make pdf diagram of decision tree
dot_data = StringIO()
tree.export_graphviz(decision_tree_with_portion, out_file=dot_data, feature_names=iris.feature_names[:3], class_names=iris.target_names,
filled=True, rounded=True, special_characters=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())[0]
graph.write_pdf(os.path.expanduser("~/Desktop/introToML/ML/New Jupyter Notebooks/iris_decision_tree_with_portion.pdf"))
new_inputs = [iris_inputs[0], iris_inputs[end_type_one], iris_inputs[end_type_two]] # make new inputs with iris_inputs, which only has three features per input
print('Class predictions: {0}'.format(list(iris.target_names[prediction] for prediction in decision_tree_with_portion.predict(new_inputs)))) # print predictions
print('Probabilities:\n{0}'.format(decision_tree_with_portion.predict_proba(new_inputs))) # print prediction probabilities
```
# Exercise Option #2 - Advanced Difficulty
Try fitting a Random Forest model to the iris data. See [this example](http://scikit-learn.org/stable/modules/ensemble.html#forest).
As seen below, the random forest and decision tree had the same F1 score (a perfect 1.0) on the data they were fit to, so by this measure they performed identically; note, however, that scoring on the training data says little about generalization (see the hedged sketch after the code block below).
```
# https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html?highlight=random%20forest#sklearn.ensemble.RandomForestClassifier
from sklearn.ensemble import RandomForestClassifier
rand_forst = RandomForestClassifier() # make model
rand_forst = rand_forst.fit(iris.data, iris.target) # fit model
print('Class predictions: {0}'.format(list(iris.target_names[prediction] for prediction in rand_forst.predict(inputs)))) # print class predictions
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html?highlight=f1#sklearn.metrics.f1_score
from sklearn.metrics import f1_score
# get predictions for whole dataset
decision_tree_predictions = decision_tree.predict(iris.data)
rand_forst_predictions = rand_forst.predict(iris.data)
# print F1 scores
print ('Decision tree F1 score: {}'.format(f1_score(iris.target, decision_tree_predictions, average='weighted')))
print ('Random forest F1 score: {}'.format(f1_score(iris.target, rand_forst_predictions, average='weighted')))
```
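As an extra sanity check (not part of the original exercise), here is a hedged sketch that scores both models on a held-out test split instead of the training data; `train_test_split` and the 30% split size are my own choices.
```
# Hedged sketch: compare the two models on data they were not trained on
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)
tree_clf = tree.DecisionTreeClassifier().fit(X_train, y_train)
forest_clf = RandomForestClassifier().fit(X_train, y_train)
print('Decision tree test F1: {}'.format(
    f1_score(y_test, tree_clf.predict(X_test), average='weighted')))
print('Random forest test F1: {}'.format(
    f1_score(y_test, forest_clf.predict(X_test), average='weighted')))
```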
|
github_jupyter
|
## 1. Volatility changes over time
<p>What is financial risk? </p>
<p>Financial risk has many faces, and we measure it in many ways, but for now, let's agree that it is a measure of the possible loss on an investment. In financial markets, where we measure prices frequently, volatility (which is analogous to <em>standard deviation</em>) is an obvious choice to measure risk. But in real markets, volatility changes with the market itself. </p>
<p><img src="https://assets.datacamp.com/production/project_738/img/VolaClusteringAssetClasses.png" alt=""></p>
<p>In the picture above, we see the returns of four very different assets. All of them exhibit alternating regimes of low and high volatilities. The highest volatility is observed around the end of 2008 - the most severe period of the recent financial crisis.</p>
<p>In this notebook, we will build a model to study the nature of volatility in the case of US government bond yields.</p>
```
# Load the packages
library(xts)
library(readr)
# Load the data
yc_raw <- read_csv("datasets/FED-SVENY.csv")
# Convert the data into xts format
yc_all <- as.xts(x = yc_raw[, -1], order.by = yc_raw$Date)
# Show only the tail of the 1st, 5th, 10th, 20th and 30th columns
yc_all_tail <- tail(yc_all[,c(1,5,10, 20, 30)])
yc_all_tail
```
## 2. Plotting the evolution of bond yields
<p>In the output table of the previous task, we see the yields for some maturities.</p>
<p>These data include the whole yield curve. The yield of a bond is the price of the money lent. The higher the yield, the more money you receive on your investment. The yield curve has many maturities; in this case, it ranges from 1 year to 30 years. Different maturities have different yields, but yields of neighboring maturities are relatively close to each other and also move together.</p>
<p>Let's visualize the yields over time. We will see that the long yields (e.g. SVENY30) tend to be more stable in the long term, while the short yields (e.g. SVENY01) vary a lot. These movements are related to the monetary policy of the FED and economic cycles.</p>
```
library(viridis)
# Define plot arguments
yields <- yc_all
plot.type <- "single"
plot.palette <- viridis(n = 30)
asset.names <- colnames(yc_all)
# Plot the time series
plot.zoo(x = yc_all, plot.type = plot.type, col = plot.palette)
# Add the legend
legend(x = "topleft", legend = asset.names,
col = plot.palette, cex = 0.45, lwd = 3)
```
## 3. Make the difference
<p>In the output of the previous task, we see the level of bond yields for some maturities, but to understand how volatility evolves we have to examine the changes in the time series. Currently, we have yield levels; we need to calculate the changes in the yield levels. This is called "differencing" in time series analysis, and it has the added benefit of making the series more stationary, i.e. less dependent on time.</p>
```
# Differentiate the time series
ycc_all <- diff.xts(yc_all)
# Show the tail of the 1st, 5th, 10th, 20th and 30th columns
ycc_all_tail <- tail(ycc_all[, c(1, 5, 10, 20, 30)])
ycc_all_tail
```
## 4. The US yields are no exceptions, but maturity matters
<p>Now that we have a time series of the changes in US government yields let's examine it visually.</p>
<p>By taking a look at the time series from the previous plots, we see hints that the returns following each other have some unique properties:</p>
<ul>
<li>The direction (positive or negative) of a return is mostly independent of the previous day's return. In other words, you don't know if the next day's return will be positive or negative just by looking at the time series.</li>
<li>The magnitude of the return is similar to the previous day's return. That means, if markets are calm today, we expect the same tomorrow. However, in a volatile market (crisis), you should expect a similarly turbulent tomorrow.</li>
</ul>
```
# Define the plot parameters
yield.changes <- ycc_all
plot.type <- "multiple"
# Plot the differentiated time series
plot.zoo(x = yield.changes, plot.type = plot.type,
ylim = c(-0.5, 0.5), cex.axis = 0.7,
ylab = 1:30, col = plot.palette)
```
## 5. Let's dive into some statistics
<p>The statistical properties visualized earlier can be measured by analytical tools. The simplest method is to test for autocorrelation. Autocorrelation measures how a datapoint's past determines the future of a time series. </p>
<ul>
<li>If the autocorrelation is close to 1, the next day's value will be very close to today's value. </li>
<li>If the autocorrelation is close to 0, the next day's value will be unaffected by today's value.</li>
</ul>
<p>Because we are interested in the recent evolution of bond yields, we will filter the time series for data from 2000 onward.</p>
```
# Filter for changes in and after 2000
ycc <- ycc_all["2000/",]
# Save the 1-year and 20-year maturity yield changes into separate variables
x_1 <- ycc[,"SVENY01"]
x_20 <- ycc[, "SVENY20"]
# Plot the autocorrelations of the yield changes
par(mfrow=c(2,2))
acf_1 <- acf(x_1)
acf_20 <- acf(x_20)
# Plot the autocorrelations of the absolute changes of yields
acf_abs_1 <- acf(abs(x_1))
acf_abs_20 <- acf(abs(x_20))
```
## 6. GARCH in action
<p>A Generalized AutoRegressive Conditional Heteroskedasticity (<a href="https://en.wikipedia.org/wiki/Autoregressive_conditional_heteroskedasticity">GARCH</a>) model is the most well known econometric tool to handle changing volatility in financial time series data. It assumes a hidden volatility variable that has a long-run average it tries to return to while the short-run behavior is affected by the past returns.</p>
<p>The most popular form of the GARCH model assumes that the volatility follows this process:</p>
<math>
σ<sup>2</sup><sub>t</sub> = ω + α ⋅ ε<sup>2</sup><sub>t-1</sub> + β ⋅ σ<sup>2</sup><sub>t-1</sub>
</math>
<p>where σ<sub>t</sub> is the current volatility, σ<sub>t-1</sub> the previous day's volatility, and ε<sub>t-1</sub> the previous day's return. The estimated parameters are ω, α, and β.</p>
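<p>To build intuition for this recursion before fitting it, here is a minimal simulation sketch (written in Python/NumPy purely for illustration, since the analysis in this notebook is done in R; the parameter values for ω, α, and β are made up).</p>
```
# Illustrative Python/NumPy sketch of the GARCH(1,1) recursion (assumed parameters)
import numpy as np
omega, alpha, beta = 1e-6, 0.1, 0.85
n = 1000
sigma2 = np.empty(n)
eps = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)   # start at the long-run variance
eps[0] = np.sqrt(sigma2[0]) * np.random.randn()
for t in range(1, n):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * np.random.randn()
# eps now shows volatility clustering similar to the yield changes plotted above
```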
<p>For GARCH modeling we will use <a href="https://cran.r-project.org/web/packages/rugarch/index.html"><code>rugarch</code></a> package developed by Alexios Ghalanos.</p>
```
library(rugarch)
# Specify the GARCH model with the skewed t-distribution
spec <- ugarchspec(distribution.model = "sstd")
# Fit the model
fit_1 <- ugarchfit(x_1, spec = spec)
# Save the volatilities and the rescaled residuals
vol_1 <- sigma(fit_1)
res_1 <- scale(residuals(fit_1, standardize = TRUE)) * sd(x_1) + mean(x_1)
# Plot the yield changes with the estimated volatilities and residuals
merge_1 <- merge.xts(x_1, vol_1, res_1)
plot.zoo(merge_1)
```
## 7. Fitting the 20-year maturity
<p>Let's do the same for the 20-year maturity. As we can see in the plot from Task 6, the bond yields of various maturities show similar but slightly different characteristics. These different characteristics can be the result of multiple factors such as the monetary policy of the FED or the fact that the investors might be different.</p>
<p>Are there differences between the 1-year maturity and 20-year maturity plots?</p>
```
# Fit the model
fit_20 <- ugarchfit(x_20, spec = spec)
# Save the volatilities and the rescaled residuals
vol_20 <- sigma(fit_20)
res_20 <- scale(residuals(fit_20, standardize = TRUE)) * sd(x_20) + mean(x_20)
# Plot the yield changes with the estimated volatilities and residuals
merge_20 <- merge.xts(x_20, vol_20, res_20)
plot.zoo(merge_20)
```
## 8. What about the distributions? (Part 1)
<p>From the plots in Task 6 and Task 7, we can see that the 1-year GARCH model shows a similar but more erratic behavior compared to the 20-year GARCH model. Not only does the 1-year model have greater volatility, but the volatility of its volatility is larger than the 20-year model. That brings us to two statistical facts of financial markets not mentioned yet. </p>
<ul>
<li>The unconditional (before GARCH) distribution of the yield differences has heavier tails than the normal distribution.</li>
<li>The distribution of the yield differences adjusted by the GARCH model has lighter tails than the unconditional distribution, but they are still heavier than the normal distribution.</li>
</ul>
<p>Let's find out what the fitted GARCH model did with the distribution we examined.</p>
```
# Calculate the kernel density for the 1-year maturity and residuals
density_x_1 <- density(x_1)
density_res_1 <- density(res_1)
# Plot the density diagram for the 1-year maturity and residuals
plot(density_x_1)
lines(density_res_1, col = "red")
# Add the normal distribution to the plot
norm_dist <- dnorm(seq(-0.4, 0.4, by = .01), mean = mean(x_1), sd = sd(x_1))
lines(seq(-0.4, 0.4, by = .01),
norm_dist,
col = "darkgreen"
)
# Add legend
legend <- c("Before GARCH", "After GARCH", "Normal distribution")
legend("topleft", legend = legend,
col = c("black", "red", "darkgreen"), lty=c(1,1))
```
## 9. What about the distributions? (Part 2)
<p>In the previous plot, we see that the two distributions from the GARCH models are different from the normal distribution of the data, but the tails, where the differences are the most profound, are hard to see. Using a Q-Q plot will help us focus in on the tails.</p>
<p>You can read an excellent summary of Q-Q plots <a href="https://stats.stackexchange.com/questions/101274/how-to-interpret-a-qq-plot">here</a>.</p>
```
# Define the data to plot: the 1-year maturity yield changes and residuals
data_orig <- x_1
data_res <- res_1
# Define the benchmark distribution
distribution <- qnorm
# Make the Q-Q plot of original data with the line of normal distribution
qqnorm(data_orig, ylim = c(-0.5, 0.5))
qqline(data_orig, distribution = distribution, col = "darkgreen")
# Make the Q-Q plot of GARCH residuals with the line of normal distribution
par(new=TRUE)
qqnorm(data_res * 0.614256270265139, col = "red", ylim = c(-0.5, 0.5))
qqline(data_res * 0.614256270265139, distribution = distribution, col = "darkgreen")
legend("topleft", c("Before GARCH", "After GARCH"), col = c("black", "red"), pch=c(1,1))
```
## 10. A final quiz
<p>In this project, we fitted a GARCH model to develop a better understanding of how bond volatility evolves and how it affects the probability distribution. In the final task, we will evaluate our model. Did the model succeed, or did it fail?</p>
```
# Q1: Did GARCH reveal how volatility changed over time? Yes or No?
(Q1 <- "Yes")
# Q2: Did GARCH bring the residuals closer to normal distribution? Yes or No?
(Q2 <- "Yes")
# Q3: Which time series of yield changes deviates more
# from a normally distributed white noise process? Choose 1 or 20.
(Q3 <- 1)
```
|
github_jupyter
|
```
# Copyright 2019 The Kubeflow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Install Pipeline SDK - This only needs to be run once in the environment.
!python3 -m pip install 'kfp>=0.1.31' --quiet
!pip3 install tensorflow==1.14 --upgrade
```
## KubeFlow Pipelines Serving Component
In this notebook, we will demo:
* Saving a Keras model in a format compatible with TF Serving
* Creating a pipeline to serve a trained model within a KubeFlow cluster
Reference documentation:
* https://www.tensorflow.org/tfx/serving/architecture
* https://www.tensorflow.org/beta/guide/keras/saving_and_serializing
* https://www.kubeflow.org/docs/components/serving/tfserving_new/
### Setup
```
# Set your output and project. !!!Must Do before you can proceed!!!
project = 'Your-Gcp-Project-ID' #'Your-GCP-Project-ID'
model_name = 'model-name' # Model name matching TF_serve naming requirements
import time
ts = int(time.time())
model_version = str(ts) # Here we use timestamp as version to avoid conflict
output = 'Your-Gcs-Path' # A GCS bucket for asset outputs
KUBEFLOW_DEPLOYER_IMAGE = 'gcr.io/ml-pipeline/ml-pipeline-kubeflow-deployer:1.7.0-rc.3'
model_path = '%s/%s' % (output,model_name)
model_version_path = '%s/%s/%s' % (output,model_name,model_version)
```
### Load a Keras Model
Loading a pretrained Keras model to use as an example.
```
import tensorflow as tf
model = tf.keras.applications.NASNetMobile(input_shape=None,
include_top=True,
weights='imagenet',
input_tensor=None,
pooling=None,
classes=1000)
```
### Save the Model for TF-Serve
Save the model using the Keras `export_saved_model` function. Note that specifically for TF Serving the output directory should be structured as model_name/model_version/saved_model.
```
tf.keras.experimental.export_saved_model(model, model_version_path)
```
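As an optional sanity check (a sketch, not part of the original pipeline), you can list what was written under the versioned path; `tf.gfile.ListDirectory` should work for both local and GCS paths.
```
# Optional sketch: confirm the SavedModel files exist under model_name/model_version/
print(tf.gfile.ListDirectory(model_version_path))
```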
### Create a pipeline using KFP TF-Serve component
```
def kubeflow_deploy_op():
return dsl.ContainerOp(
name = 'deploy',
image = KUBEFLOW_DEPLOYER_IMAGE,
arguments = [
'--model-export-path', model_path,
'--server-name', model_name,
]
)
import kfp
import kfp.dsl as dsl
# The pipeline definition
@dsl.pipeline(
name='sample-model-deployer',
description='Sample for deploying models using KFP model serving component'
)
def model_server():
deploy = kubeflow_deploy_op()
```
Submit pipeline for execution on Kubeflow Pipelines cluster
```
kfp.Client().create_run_from_pipeline_func(model_server, arguments={})
#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)
```
|
github_jupyter
|
# Talktorial 4
# Ligand-based screening: compound similarity
#### Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin
Andrea Morger and Franziska Fritz
## Aim of this talktorial
In this talktorial, we cover different approaches to encoding (descriptors, fingerprints) and comparing (similarity measures) compounds. Furthermore, we perform a virtual screening in the form of a similarity search for the EGFR inhibitor gefitinib against a dataset of compounds tested on EGFR, which was retrieved from the ChEMBL database and filtered by Lipinski's rule of five (see **Talktorial 2**).
## Learning goals
### Theory
* Molecular similarity
* Molecular descriptors
* Molecular fingerprints
  * Substructure-based fingerprints
    * MACCS fingerprints
    * Morgan fingerprints, circular fingerprints
* Molecular similarity measures
  * Tanimoto coefficient
  * Dice coefficient
* Virtual screening
  * Virtual screening using a similarity search
### Practical
* Load and draw molecules
* Calculate molecular descriptors
  * 1D molecular descriptors: molecular weight
  * 2D molecular descriptors: MACCS fingerprints
  * 2D molecular descriptors: Morgan fingerprints
* Calculate molecular similarity
  * MACCS fingerprints: Tanimoto and Dice similarity
  * Morgan fingerprints: Tanimoto and Dice similarity
* Virtual screening using a similarity search
  * Compare a query compound with all compounds in the dataset
  * Distribution of similarity values
  * Visualize the most similar molecules
  * Generate an enrichment plot
## References
* Review "Molecular similarity in medicinal chemistry" ([<i>J. Med. Chem.</i> (2014), <b>57</b>, 3186-3204](http://pubs.acs.org/doi/abs/10.1021/jm401411z))
* Morgan fingerprints in RDKit ([RDKit tutorial on Morgan fingerprints](http://www.rdkit.org/docs/GettingStartedInPython.html#morgan-fingerprints-circular-fingerprints))
* ECFP - extended-connectivity fingerprints ([<i>J. Chem. Inf. Model.</i> (2010), <b>50</b>,742-754](https://pubs.acs.org/doi/abs/10.1021/ci100050t))
* Chemical space
([<i>ACS Chem. Neurosci.</i> (2012), <b>19</b>, 649-57](https://www.ncbi.nlm.nih.gov/pubmed/23019491))
* List of descriptors available in RDKit ([RDKit documentation: Descriptors](https://www.rdkit.org/docs/GettingStartedInPython.html#list-of-available-descriptors))
* List of fingerprints available in RDKit ([RDKit documentation: Fingerprints](https://www.rdkit.org/docs/GettingStartedInPython.html#list-of-available-fingerprints))
* Enrichment plots ([Applied Chemoinformatics, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, (2018), **1**, 313-31](https://onlinelibrary.wiley.com/doi/10.1002/9783527806539.ch6h))
_____________________________________________________________________________________________________________________
## Theory
### Molecular similarity
Molecular similarity is a well-known and frequently used concept in chemoinformatics. Comparing compounds and their properties has many applications and can help to find new compounds with a desired property and bioactivity.
The idea that structurally similar compounds show similar properties, and thus similar bioactivity, is expressed in the similar property principle (SPP) and in structure-activity relationships (SAR). In this context, virtual screening is based on the idea that, given a set of compounds with known binding affinity, more such compounds can be searched for.
### Molecular descriptors
Similarity can be assessed in many different ways depending on the application (see <a href="http://pubs.acs.org/doi/abs/10.1021/jm401411z"><i>J. Med. Chem.</i> (2014), <b>57</b>, 3186-3204</a>):
* **1D molecular descriptors**: solubility, logP, molecular weight, melting point, etc.
  * Global descriptors: the whole molecule is represented by a single value <br>
  * Usually not specific enough on their own to identify a molecule for machine learning (ML)
  * Can be added to 2D fingerprints to improve the molecular encoding for machine learning
* **2D molecular descriptors**: molecular graphs, paths, fragments, atom environments
  * Detailed representation of the individual parts of a molecule
  * Many features/bits per molecule, so-called fingerprints
  * Very frequently used for similarity search and machine learning
* **3D molecular descriptors**: shape, stereochemistry
  * Chemists are usually trained on two-dimensional representations <br>
  * Less robust than 2D representations because of molecular flexibility (which is the "correct" conformation of a compound?)
* **Biological similarity**
  * Biological fingerprints (e.g. each bit represents the bioactivity measured against a different target)
  * Independent of the chemical structure
  * Experimental data (or predictions) needed
In **Talktorial 2**, we already learned how to calculate 1D physicochemical parameters such as molecular weight and logP. The descriptors of this kind implemented in RDKit can be found in the [RDKit documentation: Descriptors](https://www.rdkit.org/docs/GettingStartedInPython.html#list-of-available-descriptors).
In the following, we focus on the definition of 2D (or 3D) molecular descriptors. Since they are in most cases unique per molecule, these descriptors are also called fingerprints.
### Molecular fingerprints
#### Substructure-based fingerprints
Molecular fingerprints encode chemical and molecular features in the form of a bitstring, bitvector, or array, where each bit corresponds to a predefined molecular feature or environment: a "1" means the feature is present, a "0" that it is absent. Note that some implementations are count-based, i.e. they count how often a certain feature occurs.
There are several ways to design fingerprints. Here we introduce two frequently used 2D fingerprints: MACCS keys and Morgan fingerprints.
RDKit provides many fingerprints besides these two, as listed in the [RDKit documentation: Fingerprints](https://www.rdkit.org/docs/GettingStartedInPython.html#list-of-available-fingerprints).
#### MACCS fingerprints
The Molecular ACCess System (MACCS) fingerprint, also called MACCS structural keys, consists of 166 predefined structural fragments. Each position stores the answer to the query whether a certain structural fragment or key is present or not. The keys were defined empirically by medicinal chemists and are easy to use and interpret ([RDKit documentation: MACCS keys](http://rdkit.org/Python_Docs/rdkit.Chem.MACCSkeys-module.html)).
<img src="images/maccs_fp.png" align="above" alt="Image cannot be shown" width="250">
<div align="center"> Figure 2: Illustration of a MACCS fingerprint (figure by Andrea Morger)</div>
#### Morgan fingerprints and circular fingerprints
This family of fingerprints is based on the Morgan algorithm. The bits correspond to the circular environment of each atom in the molecule. The radius determines how many neighboring bonds and atoms are taken into account for the environment. The length of the bitstring can also be defined, and longer bitstrings are folded down to the desired length, so Morgan fingerprints are not restricted to a certain number of bits. For more about Morgan fingerprints, see the [RDKit documentation: Morgan fingerprints](http://www.rdkit.org/docs/GettingStartedInPython.html#morgan-fingerprints-circular-fingerprints). Extended-connectivity fingerprints (ECFP) are another frequently used class of fingerprints, derived from a variation of the Morgan algorithm; for further information see ([<i>J. Chem. Inf. Model.</i> (2010), <b>50</b>,742-754](https://pubs.acs.org/doi/abs/10.1021/ci100050t)).
<img src="images/morgan_fp.png" align="above" alt="Image cannot be shown" width="270">
<div align="center">Figure 3: Illustration of a Morgan circular fingerprint (figure by Andrea Morger)</div>
### Molecular similarity measures
Once descriptors/fingerprints have been calculated, comparing them allows us to assess the similarity between two molecules. Molecular similarity can be quantified with various similarity coefficients; two frequently used measures are the Tanimoto and Dice indices ([<i>J. Med. Chem.</i> (2014), <b>57</b>, 3186-3204](http://pubs.acs.org/doi/abs/10.1021/jm401411z)).
#### Tanimoto coefficient
$$T _{c}(A,B) = \frac{c}{a+b-c}$$
a: number of features present in compound A <br>
b: number of features present in compound B <br>
c: number of features shared by compounds A and B
#### Dice coefficient
$$D_{c}(A,B) = \frac{c}{\frac{1}{2}(a+b)}$$
a: number of features present in compound A <br>
b: number of features present in compound B <br>
c: number of features shared by compounds A and B
The similarity measures usually consider the number of positive (set) bits of each fingerprint and the number of positive bits the two fingerprints have in common. The Dice similarity usually returns higher values than the Tanimoto similarity, which follows from the different denominators:
$$\frac{c}{a+b-c} \leq \frac{c}{\frac{1}{2}(a+b)}$$
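As a quick worked example (numbers chosen purely for illustration): if fingerprint A has a = 20 bits set, fingerprint B has b = 30 bits set, and c = 10 bits are shared, then
$$T_{c} = \frac{10}{20+30-10} = 0.25, \qquad D_{c} = \frac{10}{\frac{1}{2}(20+30)} = 0.40,$$
which illustrates the inequality above.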
### Virtual screening
A challenge in the early phase of drug discovery is to narrow a set of small molecules (compounds) down from the huge space of possible molecules to those with the potential to bind the target of interest. This chemical space is enormous; small molecules can be built from up to 10<sup>20</sup> combinations of chemical moieties ([<i>ACS Chem. Neurosci.</i> (2012), <b>19</b>, 649-57](https://www.ncbi.nlm.nih.gov/pubmed/23019491)).
Since high-throughput screening (HTS) experiments that test the activity of these molecules against the target of interest are very costly and time-consuming, computer-aided methods are expected to narrow the list of molecules to be tested down to a more focused set. In this process, called virtual (high-throughput) screening, a large library of small molecules is filtered by rules and/or patterns in order to find those molecules most likely to bind the target under investigation.
#### Virtual screening using a similarity search
A simple way of performing a virtual screening is to compare a set of new compounds with one (or several) known active compound(s) and to look for the most similar ones. Based on the similar property principle (SPP), the compounds most similar to, e.g., a known inhibitor are presumed to have a similar effect. What we need for a similarity search is (discussed in more detail above):
* a representation that encodes the chemical/molecular features
* optionally, a weighting of the potential of the features
* a similarity measure
A similarity search can be performed by calculating the similarity between one compound and every compound in a given database. Ranking the database compounds by their similarity coefficient yields the most similar molecules.
#### Enrichment plots
Enrichment plots are used to validate the results of a virtual screening; they show the fraction of active compounds found in the top x% of the ranked list, i.e.
* the fraction of top-ranked compounds of the whole dataset (x-axis) vs.
* the fraction of active compounds of the whole dataset (y-axis)
<img src="images/enrichment_plot.png" align="above" alt="Image cannot be shown" width="270">
<div align="center">Figure 4: Example of an enrichment plot for the results of a virtual screening</div>
## Practical
In the first part of the practical section, we use RDKit to encode compounds (molecular fingerprints) and then compare them in order to calculate their similarity (molecular similarity measures), as discussed in the theory section above.
In the second part, we use these encodings and comparison methods to perform a similarity search (virtual screening). The known EGFR inhibitor gefitinib is used as the query to search for similar compounds in a dataset of compounds tested on EGFR, which was extracted from ChEMBL in **Talktorial 1** and filtered by Lipinski's rule of five in **Talktorial 2**.
### Load and draw molecules
First, we define and draw eight example compounds, which we will later encode and compare. The molecules given as SMILES are converted to RDKit mol objects and visualized with RDKit's `Draw` function.
```
# Import relevant Python packages
# Basic molecule handling functionality lives in the module rdkit.Chem
from rdkit import Chem
# Drawing
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import Draw
from rdkit.Chem import Descriptors
from rdkit.Chem import AllChem
from rdkit.Chem import MACCSkeys
from rdkit.Chem import rdFingerprintGenerator
from rdkit import DataStructs
import math
import numpy as np
import pandas as pd
from rdkit.Chem import PandasTools
import matplotlib.pyplot as plt
# Compounds in SMILES format
smiles1 = 'CC1C2C(C3C(C(=O)C(=C(C3(C(=O)C2=C(C4=C1C=CC=C4O)O)O)O)C(=O)N)N(C)C)O' # Doxycycline
smiles2 = 'CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=C(C=C3)O)N)C(=O)O)C' # Amoxicilline
smiles3 = 'C1=COC(=C1)CNC2=CC(=C(C=C2C(=O)O)S(=O)(=O)N)Cl' # Furosemide
smiles4 = 'CCCCCCCCCCCC(=O)OCCOC(=O)CCCCCCCCCCC' # Glycol dilaurate
smiles5 = 'C1NC2=CC(=C(C=C2S(=O)(=O)N1)S(=O)(=O)N)Cl' # Hydrochlorothiazide
smiles6 = 'CC1=C(C(CCC1)(C)C)C=CC(=CC=CC(=CC(=O)O)C)C' # Isotretinoine
smiles7 = 'CC1(C2CC3C(C(=O)C(=C(C3(C(=O)C2=C(C4=C1C=CC=C4O)O)O)O)C(=O)N)N(C)C)O' # Tetracycline
smiles8 = 'CC1C(CC(=O)C2=C1C=CC=C2O)C(=O)O' # Hemi-cycline D
# Create a list of the compound SMILES
smiles = [smiles1, smiles2, smiles3, smiles4, smiles5, smiles6, smiles7, smiles8]
# Create a list of ROMol objects
mols = [Chem.MolFromSmiles(i) for i in smiles]
# Create a list of compound names
mol_names = ['Doxycycline', 'Amoxicilline', 'Furosemide', 'Glycol dilaurate',
'Hydrochlorothiazide', 'Isotretinoine', 'Tetracycline', 'Hemi-cycline D']
# Draw the compounds
Draw.MolsToGridImage(mols, molsPerRow=2, subImgSize=(450,150), legends=mol_names)
```
### Calculate molecular descriptors
We extract and generate 1D and 2D molecular descriptors in order to compare the compounds. For the 2D descriptors, we generate different types of fingerprints, which we later use to calculate the similarity between the compounds.
#### 1D molecular descriptors: Molecular weight
We calculate the molecular weight of the example structures.
```
# Calculate the molecular weight of the compounds
mol_weights = [Descriptors.MolWt(mol) for mol in mols]
```
For a visual comparison, we draw the structures of compounds with similar molecular weights. Is the molecular weight a useful descriptor for compound similarity?
```
# Create a DataFrame to store the results
sim_mw_df = pd.DataFrame({'smiles': smiles, 'name': mol_names, 'mw': mol_weights, "Mol": mols})
# Sort by molecular weight
sim_mw_df.sort_values(['mw'], ascending=False, inplace=True)
sim_mw_df[["smiles", "name", "mw"]]
# Draw the compounds together with their molecular weights
Draw.MolsToGridImage(sim_mw_df["Mol"],
legends=[i+': '+str(round(j, 2))+" Da" for i,j in zip(sim_mw_df["name"], sim_mw_df["mw"])],
molsPerRow=2, subImgSize=(450, 150))
```
As we can see, compounds with similar molecular weights can have similar structures (e.g. Doxycycline/Tetracycline). However, there are also compounds with a similar number of atoms but a completely different atom arrangement (e.g. Doxycycline/Glycol dilaurate or Hydrochlorothiazide/Isotretinoine).
Next, we look at 2D molecular descriptors, which describe the features of a molecule in more detail.
#### 2D molecular descriptors: MACCS fingerprints
MACCS fingerprints can easily be generated with RDKit. Since an explicit bitvector is not readable by us humans, we additionally convert it to a bitstring.
```
# Generate MACCS fingerprints
maccs_fp1 = MACCSkeys.GenMACCSKeys(mols[0]) # Doxycycline
maccs_fp2 = MACCSkeys.GenMACCSKeys(mols[1]) # Amoxicilline
maccs_fp1
# Print the fingerprint as a bitstring
maccs_fp1.ToBitString()
# Generate MACCS fingerprints for all compounds
maccs_fp_list = []
for i in range(len(mols)):
maccs_fp_list.append(MACCSkeys.GenMACCSKeys(mols[i]))
```
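As a small side check (a sketch of my own, not part of the original talktorial), we can count how many MACCS bits are set per example molecule using the RDKit bitvector method `GetNumOnBits()`.
```
# Sketch: number of set bits per molecule in the RDKit MACCS bit vector
for name, fp in zip(mol_names, maccs_fp_list):
    print(name, fp.GetNumOnBits())
```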
#### 2D molecular descriptors: Morgan fingerprints
We also calculate Morgan circular fingerprints with RDKit. Two different functions allow us to calculate the Morgan fingerprint either as an int vector or as a bit vector.
```
# Generate Morgan fingerprints (int vectors); by default the radius is 2 and the vector length 2048
circ_fp1 = rdFingerprintGenerator.GetCountFPs(mols[:1])[0]
circ_fp1
# Look at the elements that are set:
circ_fp1.GetNonzeroElements()
# Generate Morgan fingerprints (as bit vectors); by default the radius is 2 and the fingerprint length 2048
circ_b_fp1 = rdFingerprintGenerator.GetFPs(mols[:1])[0]
circ_b_fp1
# Print the fingerprint as a bitstring
circ_b_fp1.ToBitString()
# Generate Morgan fingerprints for all compounds
circ_fp_list = rdFingerprintGenerator.GetFPs(mols)
```
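To illustrate the role of the radius described above, here is a short sketch (my own addition) that generates Morgan bit vectors for Doxycycline with different radii, using `AllChem.GetMorganFingerprintAsBitVect`, which is available via the `AllChem` import above.
```
# Sketch: effect of the Morgan radius on the number of set bits (Doxycycline)
for radius in (1, 2, 3):
    fp = AllChem.GetMorganFingerprintAsBitVect(mols[0], radius, nBits=2048)
    print("radius", radius, ":", fp.GetNumOnBits(), "bits set")
```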
### Calculate molecular similarity
In the following, we apply the two similarity measures, **Tanimoto** and **Dice**, to the two types of fingerprints, i.e. the **MACCS** and **Morgan** fingerprints.
Example: compare two MACCS fingerprints with the Tanimoto similarity
```
# Calculate the Tanimoto coefficient between two compounds
DataStructs.TanimotoSimilarity(maccs_fp1, maccs_fp2)
# Calculate the Tanimoto coefficient of a compound with itself
DataStructs.TanimotoSimilarity(maccs_fp1, maccs_fp1)
```
Next, we want to compare a query compound with our list of compounds.
We therefore use RDKit's ```BulkTanimotoSimilarity``` and ```BulkDiceSimilarity``` functions to calculate the similarity between the query fingerprint and the fingerprints stored in a list, based on either the Tanimoto or the Dice similarity measure.
After calculating the similarities, we want to draw the ranked molecules with the following function:
```
def draw_ranked_molecules(sim_df_sorted, sorted_column):
"""
Function to draw the molecules of a (sorted) DataFrame
"""
# Define the labels: the first molecule is the query, the following molecules are ranked starting at 1
rank = ["#"+str(i)+": " for i in range(0, len(sim_df_sorted))]
rank[0] = "Query: "
# Compounds most similar to Doxycycline (Tanimoto and MACCS fingerprints)
top_smiles = sim_df_sorted["smiles"].tolist()
top_mols = [Chem.MolFromSmiles(i) for i in top_smiles]
top_names = [i+j+" ("+str(round(k, 2))+")" for i, j, k in zip(rank, sim_df_sorted["name"].tolist(),
sim_df_sorted[sorted_column])]
return Draw.MolsToGridImage(top_mols, legends=top_names, molsPerRow=2, subImgSize=(450, 150))
```
Next, we go through all combinations of comparing MACCS/Morgan fingerprints with the Tanimoto/Dice similarity measures. We therefore create a DataFrame that summarizes the results.
```
# Create a DataFrame to store the results
sim_df = pd.DataFrame({'smiles': smiles, 'name': mol_names})
```
#### MACCS fingerprints: Tanimoto similarity
```
# Add the similarity scores to the DataFrame
sim_df['tanimoto_MACCS'] = DataStructs.BulkTanimotoSimilarity(maccs_fp1,maccs_fp_list)
# DataFrame sorted by the Tanimoto similarity of the MACCS fingerprints
sim_df_sorted_t_ma = sim_df.copy()
sim_df_sorted_t_ma.sort_values(['tanimoto_MACCS'], ascending=False, inplace=True)
sim_df_sorted_t_ma
# Draw the molecules ranked by the Tanimoto similarity of the MACCS fingerprints
draw_ranked_molecules(sim_df_sorted_t_ma, "tanimoto_MACCS")
```
With the MACCS fingerprints, Tetracycline is the most similar molecule (highest score), followed by Amoxicilline. In contrast to the 1D descriptor molecular weight, the linear molecule Glycol dilaurate is now recognized as dissimilar (lowest rank).
#### MACCS fingerprints: Dice similarity
```
# Add the similarity scores to the DataFrame
sim_df['dice_MACCS'] = DataStructs.BulkDiceSimilarity(maccs_fp1, maccs_fp_list)
# DataFrame sorted by the Dice similarity of the MACCS fingerprints
sim_df_sorted_d_ma = sim_df.copy()
sim_df_sorted_d_ma.sort_values(['dice_MACCS'], ascending=False, inplace=True)
sim_df_sorted_d_ma
```
By definition, the Tanimoto and Dice similarity measures yield the same ranking, but the Dice similarity values are higher (see the theory part of this talktorial for the Tanimoto and Dice formulas).
#### Morgan fingerprints: Tanimoto similarity
```
# Add the similarity scores to the DataFrame
sim_df['tanimoto_morgan'] = DataStructs.BulkTanimotoSimilarity(circ_b_fp1, circ_fp_list)
sim_df['dice_morgan'] = DataStructs.BulkDiceSimilarity(circ_b_fp1, circ_fp_list)
# DataFrame sorted by the Tanimoto similarity of the Morgan fingerprints
sim_df_sorted_t_mo = sim_df.copy()
sim_df_sorted_t_mo.sort_values(['tanimoto_morgan'], ascending=False, inplace=True)
sim_df_sorted_t_mo
# Draw the molecules ranked by the Tanimoto similarity of the Morgan fingerprints
draw_ranked_molecules(sim_df_sorted_t_mo, "tanimoto_morgan")
```
We compare the MACCS and Morgan similarities by plotting Tanimoto (Morgan) vs. Tanimoto (MACCS).
```
fig, axes = plt.subplots(figsize=(6,6), nrows=1, ncols=1)
sim_df_sorted_t_mo.plot('tanimoto_MACCS','tanimoto_morgan',kind='scatter',ax=axes)
plt.plot([0,1],[0,1],'k--')
axes.set_xlabel("MACCS")
axes.set_ylabel("Morgan")
plt.show()
```
Different fingerprints (here: MACCS and Morgan fingerprints) yield different similarity values (here: Tanimoto coefficients) and, as shown here, potentially a different ranking of compound similarities.
The Morgan fingerprint also identifies Tetracycline as the compound most similar to Doxycycline (although with a lower score) and Glycol dilaurate as the least similar one. However, the second rank goes to Hemi-cycline D, a substructure of the cycline compounds, which may be explained by the Morgan fingerprint algorithm being based on atom environments (whereas the MACCS fingerprint queries the occurrence of specific features).
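To put a number on how differently the two fingerprints rank our eight molecules, here is a small sketch (assuming SciPy is available, as in the other talktorials) using the Spearman rank correlation:
```
# Sketch: rank correlation between MACCS- and Morgan-based Tanimoto similarities
from scipy.stats import spearmanr
rho, _ = spearmanr(sim_df["tanimoto_MACCS"], sim_df["tanimoto_morgan"])
print("Spearman rank correlation:", round(rho, 2))
```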
### Virtual screening using a similarity search
Now that we have learned how to calculate fingerprints and similarities, we can apply this knowledge to a similarity search of a query compound against a whole compound set.
We use the known EGFR inhibitor gefitinib as the query and search for similar compounds in a dataset of compounds tested on EGFR, which was extracted from ChEMBL in **Talktorial 1** and filtered by Lipinski's rule of five in **Talktorial 2**.
#### Compare the query compound with all compounds in the dataset
We load the compounds from the csv file generated in **Talktorial 2**, which contains the filtered compounds tested on EGFR from the ChEMBL database. With a single query compound (here gefitinib), we search for similar compounds in the dataset.
```
# Read data from the csv file containing the compounds in SMILES format
filtered_df = pd.read_csv('../data/T2/EGFR_compounds_lipinski.csv', delimiter=';', usecols=['molecule_chembl_id', 'smiles', 'pIC50'])
filtered_df.head()
# Generate a Mol object from the SMILES of the query compound
query = Chem.MolFromSmiles('COC1=C(OCCCN2CCOCC2)C=C2C(NC3=CC(Cl)=C(F)C=C3)=NC=NC2=C1'); # Gefitinib, Iressa
query
# Generate the MACCS and Morgan fingerprints of the query compound
maccs_fp_query = MACCSkeys.GenMACCSKeys(query)
circ_fp_query = rdFingerprintGenerator.GetCountFPs([query])[0]
# Generate the MACCS and Morgan fingerprints of all compounds in the file
ms = [Chem.MolFromSmiles(i) for i in filtered_df.smiles]
circ_fp_list = rdFingerprintGenerator.GetCountFPs(ms)
maccs_fp_list = [MACCSkeys.GenMACCSKeys(m) for m in ms]
# Calculate the Tanimoto similarity between the query compound (gefitinib) and all compounds in the file (MACCS, Morgan)
tanimoto_maccs = DataStructs.BulkTanimotoSimilarity(maccs_fp_query,maccs_fp_list)
tanimoto_circ = DataStructs.BulkTanimotoSimilarity(circ_fp_query,circ_fp_list)
# Calculate the Dice similarity between the query compound (gefitinib) and all compounds in the file (MACCS, Morgan)
dice_maccs = DataStructs.BulkDiceSimilarity(maccs_fp_query,maccs_fp_list)
dice_circ = DataStructs.BulkDiceSimilarity(circ_fp_query,circ_fp_list)
# Create a table with the ChEMBL IDs, SMILES, and similarities of the compounds to gefitinib
similarity_df = pd.DataFrame({'ChEMBL_ID':filtered_df.molecule_chembl_id,
'bioactivity':filtered_df.pIC50,
'tanimoto_MACCS': tanimoto_maccs,
'tanimoto_morgan': tanimoto_circ,
'dice_MACCS': dice_maccs,
'dice_morgan': dice_circ,
'smiles': filtered_df.smiles,})
# Show the DataFrame
similarity_df.head()
```
#### Distribution of similarity values
As mentioned in the theory section, when comparing with the same fingerprint (e.g. MACCS), the Tanimoto similarity values are lower than the Dice similarity values. Also, when comparing two different fingerprints (e.g. MACCS and Morgan), the values of a similarity measure (e.g. Tanimoto) change.
We can inspect the distributions by plotting histograms.
```
# Plot the distributions of the Tanimoto/Dice similarity values for the MACCS and Morgan fingerprints
%matplotlib inline
fig, axes = plt.subplots(figsize=(10,6), nrows=2, ncols=2)
similarity_df.hist(["tanimoto_MACCS"], ax=axes[0,0])
similarity_df.hist(["tanimoto_morgan"], ax=axes[0,1])
similarity_df.hist(["dice_MACCS"], ax=axes[1,0])
similarity_df.hist(["dice_morgan"], ax=axes[1,1])
axes[1,0].set_xlabel("similarity value")
axes[1,0].set_ylabel("# molecules")
plt.show()
```
Here, we compare the similarity values again, this time directly comparing the Tanimoto and Dice similarities for the two fingerprints.
```
fig, axes = plt.subplots(figsize=(12,6), nrows=1, ncols=2)
similarity_df.plot('tanimoto_MACCS','dice_MACCS',kind='scatter',ax=axes[0])
axes[0].plot([0,1],[0,1],'k--')
axes[0].set_xlabel("Tanimoto(MACCS)")
axes[0].set_ylabel("Dice(MACCS)")
similarity_df.plot('tanimoto_morgan','dice_morgan',kind='scatter',ax=axes[1])
axes[1].plot([0,1],[0,1],'k--')
axes[1].set_xlabel("Tanimoto(Morgan)")
axes[1].set_ylabel("Dice(Morgan)")
plt.show()
```
The similarity distributions are important for interpreting similarity values (e.g. a value of 0.6 needs to be judged differently for MACCS vs. Morgan fingerprints and for Tanimoto vs. Dice similarity).
In the following, we draw the most similar compounds based on the Tanimoto similarity of the Morgan fingerprints.
#### Draw the most similar compounds
We visually inspect the structure of gefitinib in comparison with the most similar compounds in our ranking. We also include the bioactivity information (pIC50 values extracted from ChEMBL in **Talktorial 1**).
```
# DataFrame sorted by tanimoto_morgan
similarity_df.sort_values(['tanimoto_morgan'], ascending=False, inplace=True)
similarity_df.head()
# Add the structural representation of the SMILES strings (ROMol - RDKit object Mol) to the DataFrame
PandasTools.AddMoleculeColumnToFrame(similarity_df, 'smiles')
# Draw the query structure and the top-ranked compounds (+ bioactivity)
sim_mols = [Chem.MolFromSmiles(i) for i in similarity_df.smiles][:11]
legend = ['#' + str(a) + ' ' + b + ' ('+str(round(c,2))+')' for a, b, c in zip(range(1,len(sim_mols)+1),
similarity_df.ChEMBL_ID,
similarity_df.bioactivity)]
Chem.Draw.MolsToGridImage(mols = [query] + sim_mols[:11],
legends = (['Gefitinib'] + legend),
molsPerRow = 4)
```
The top-ranked compounds compared to gefitinib in our dataset are, first, the entries of gefitinib itself contained in the dataset (ranks 1 and 2), followed by variations of gefitinib (e.g. with a different benzene substitution pattern).
Note: ChEMBL contains a full structure-activity relationship analysis for gefitinib (a well-studied compound), so it is not surprising that our dataset contains many gefitinib-like compounds.
Next, we want to check how well the similarity search can distinguish active from inactive compounds in our dataset. For this, we use the bioactivity values (against EGFR) of the compounds retrieved from ChEMBL in **Talktorial 1**.
#### Generate an enrichment plot
To validate the virtual screening and see the ratio of active compounds found, we generate an enrichment plot.
The enrichment plot shows:
* the fraction of top-ranked compounds of the whole dataset (x-axis) vs.
* the fraction of active compounds of the whole dataset (y-axis)
We compare the Tanimoto similarities of the MACCS and Morgan fingerprints.
To decide whether a compound is treated as active or inactive, we apply the commonly used pIC50 cutoff of 6.3. Several pIC50 cutoffs ranging from 5 to 7 are suggested in the literature, some of which also define an exclusion range without data points, but we consider this cutoff (6.3) reasonable.
The same cutoff is used for machine learning in **Talktorial 10**.
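Before building the full plot, the idea can be sketched in a few lines (my own compact version, using the pIC50 cutoff of 6.3 mentioned above): the fraction of true actives recovered in the top 5% of the Morgan/Tanimoto ranking.
```
# Compact sketch of the enrichment idea for one similarity measure
top_n = int(0.05 * len(similarity_df))
ranked = similarity_df.sort_values("tanimoto_morgan", ascending=False)
actives_all = (similarity_df.bioactivity >= 6.3).sum()
actives_top = (ranked.head(top_n).bioactivity >= 6.3).sum()
print("Fraction of true actives in the top 5%:", round(actives_top / actives_all, 3))
```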
```
# pIC50 cutoff value to discriminate active and inactive compounds
threshold = 6.3
similarity_df.head()
def get_enrichment_data(similarity_df, similarity_measure, threshold):
"""
Function that calculates the x and y values of an enrichment plot:
x - % of ranked dataset
y - % of true actives identified
"""
# Get the number of compounds in the dataset
mols_all = len(similarity_df)
# Get the number of active compounds in the dataset
actives_all = sum(similarity_df.bioactivity >= threshold)
# Initialize a list that keeps track of the actives counter while going through the dataset
actives_counter_list = []
# Initialize the counter for active compounds
actives_counter = 0
# Note: the data has to be ranked for the enrichment plot.
# Sort the compounds by the selected similarity measure.
similarity_df.sort_values([similarity_measure], ascending=False, inplace=True)
# Go through the ranked dataset one by one and check (via the bioactivity) whether each compound is active
for value in similarity_df.bioactivity:
if value >= threshold:
actives_counter += 1
actives_counter_list.append(actives_counter)
# Convert the number of compounds into the % of the ranked dataset
mols_perc_list = [i/mols_all for i in list(range(1, mols_all+1))]
# Convert the number of actives into the % of true actives identified
actives_perc_list = [i/actives_all for i in actives_counter_list]
# Create a DataFrame with the x and y values plus the label
enrich_df = pd.DataFrame({'% ranked dataset':mols_perc_list,
'% true actives identified':actives_perc_list,
'similarity_measure': similarity_measure})
return enrich_df
# Define the similarity measures to plot
sim_measures = ['tanimoto_MACCS', 'tanimoto_morgan']
# Create a list of DataFrames with the enrichment plot data for all similarity measures
enrich_data = [get_enrichment_data(similarity_df, i, threshold) for i in sim_measures]
# Prepare the dataset for plotting:
# concatenate the DataFrames for the individual similarity measures into one DataFrame...
# ...the different similarity measures can be distinguished via the "similarity_measure" column
enrich_df = pd.concat(enrich_data)
fig, ax = plt.subplots(figsize=(6, 6))
fontsize = 20
for key, grp in enrich_df.groupby(['similarity_measure']):
ax = grp.plot(ax = ax,
x = '% ranked dataset',
y = '% true actives identified',
label=key,
alpha=0.5, linewidth=4)
ax.set_ylabel('% True actives identified', size=fontsize)
ax.set_xlabel('% Ranked dataset', size=fontsize)
# Ratio of active compounds in the dataset
ratio = sum(similarity_df.bioactivity >= threshold) / len(similarity_df)
# Plot the optimal curve
ax.plot([0,ratio,1], [0,1,1], label="Optimal curve", color="black", linestyle="--")
# Plot the random curve
ax.plot([0,1], [0,1], label="Random curve", color="grey", linestyle="--")
plt.tick_params(labelsize=16)
plt.legend(labels=['MACCS', 'Morgan', "Optimal", "Random"], loc=(.5, 0.08),
fontsize=fontsize, labelspacing=0.3)
# Save the plot - use bbox_inches to include the text boxes:
# https://stackoverflow.com/questions/44642082/text-or-legend-cut-from-matplotlib-figure-on-savefig?rq=1
plt.savefig("../data/T4/enrichment_plot.png", dpi=300, bbox_inches="tight", transparent=True)
plt.show()
```
According to the enrichment plot, the comparison based on the Morgan fingerprint performs slightly better than the one based on the MACCS fingerprint.
```
# Get the experimental EF for x% of the ranked dataset
def print_data_ef(perc_ranked_dataset, enrich_df):
data_ef = enrich_df[enrich_df['% ranked dataset'] <= perc_ranked_dataset].tail(1)
data_ef = round(float(data_ef['% true actives identified']), 1)
print("Experimental EF for ", perc_ranked_dataset, "% of ranked dataset: ", data_ef, "%", sep="")
# Get the random EF for x% of the ranked dataset
def print_random_ef(perc_ranked_dataset):
random_ef = round(float(perc_ranked_dataset), 1)
print("Random EF for ", perc_ranked_dataset, "% of ranked dataset: ", random_ef, "%", sep="")
# Get the optimal EF for x% of the ranked dataset
def print_optimal_ef(perc_ranked_dataset, similarity_df, threshold):
ratio = sum(similarity_df.bioactivity >= threshold) / len(similarity_df) * 100
if perc_ranked_dataset <= ratio:
optimal_ef = round(100/ratio * perc_ranked_dataset, 1)
else:
optimal_ef = round(float(100), 2)
print("Optimal EF for ", perc_ranked_dataset, "% of ranked dataset: ", optimal_ef, "%", sep="")
# Select the percentage
perc_ranked_list = 5
# Get the EF data
print_data_ef(perc_ranked_list, enrich_df)
print_random_ef(perc_ranked_list)
print_optimal_ef(perc_ranked_list, similarity_df, threshold)
```
**Translator's note (04/2020)**
This is where the original practical part ends, but this EF result looks a bit strange. Judging from the enrichment plot, the enrichment factors should be ordered "**optimal** > **Experimental** > **Random**". If **Experimental** were lower than **Random**, the selection would actually be biased towards inactive compounds.
Let's go through it step by step and see whether something is wrong.
First, the DataFrame **enrich_df** used to calculate the EF concatenates the data for the two similarity measures, so we split them apart again.
```
# tanimoto_MACCS
enrich_df_taMA = enrich_df[enrich_df['similarity_measure'] == 'tanimoto_MACCS']
# tanimoto_morgan
enrich_df_tamo = enrich_df[enrich_df['similarity_measure'] == 'tanimoto_morgan']
print("Size of enrich_df: ", len(enrich_df))
print("Size of tanimoto_MACCS DataFrame: ", len(enrich_df_taMA))
print("Size of tanimoto_morgan DataFrame: ", len(enrich_df_tamo))
```
Let's look at the rows of the DataFrame corresponding to 5% of the ranked dataset.
```
# Number of rows corresponding to 5%
index_5perc = round(len(enrich_df_taMA)*0.05)
# The DataFrame index starts at 0, so show the row at index - 1
enrich_df_taMA[index_5perc-1:index_5perc]
```
For readability, the row is extracted as a DataFrame using a slice.
Within the number of compounds corresponding to the top 5% of the ranking (0.049), the fraction of compounds that are actually active is 7.3% (% true actives identified, 0.07319). This value looks reasonable compared with the **Random** and **Optimal** curves above.
The data in the DataFrame itself seems fine, so the problem must lie in how the value is extracted (the function `print_data_ef`).
Let's execute the body of the function step by step.
```
# Set to 5%
perc_ranked_dataset = 5
# Select the DataFrame within 5% and take its last row (tail)
enrich_df[enrich_df['% ranked dataset'] <= perc_ranked_dataset].tail(1)
```
The extracted row is "index: 4522", i.e. the 4523rd row, and its `% true actives identified` value is what was reported as the EF (1.0).
With the threshold of 5, all compounds with `similarity_measure` **tanimoto_morgan** were selected: the DataFrame stores fractions rather than percentages, but a value given in percent was used for the selection. That seems to be the cause.
Let's now check the correct values for each similarity measure.
```
# Redefine the function
def print_data_ef2(perc_ranked_dataset, enrich_df):
perc_ranked_dataset_100 = perc_ranked_dataset / 100
data_ef = enrich_df[enrich_df['% ranked dataset'] <= perc_ranked_dataset_100].tail(1)
data_ef = round(float(data_ef['% true actives identified'] * 100), 1)
print("Experimental EF for ", perc_ranked_dataset, "% of ranked dataset: ", data_ef, "%", sep="")
# For the MACCS keys
# Select the percentage
perc_ranked_list = 5
# Get the EF data
print_data_ef2(perc_ranked_list, enrich_df_taMA)
print_random_ef(perc_ranked_list)
print_optimal_ef(perc_ranked_list, similarity_df, threshold)
# For the Morgan fingerprints
# Select the percentage
perc_ranked_list = 5
# Get the EF data
print_data_ef2(perc_ranked_list, enrich_df_tamo)
print_random_ef(perc_ranked_list)
print_optimal_ef(perc_ranked_list, similarity_df, threshold)
```
In both cases we now have "**optimal** > **Experimental** > **Random**", with **Morgan** giving slightly better values than **MACCS**. The values are now consistent with the enrichment plot.
**End of translator's note**
## Discussion
Here, we performed a virtual screening using the Tanimoto similarity. Of course, it could also be performed with the Dice similarity or other similarity measures.
A disadvantage of similarity search with molecular fingerprints is that, being based on compound similarity, it does not yield novel structures. Another challenge when working with molecular similarity are so-called activity cliffs: a small change in a functional group of a molecule can cause a large change in bioactivity.
## Quiz
* Where could you start in order to deal with activity cliffs?
* What are the advantages and disadvantages of MACCS fingerprints and Morgan fingerprints compared with each other?
* How can you explain the differences in the ordering of the similarity DataFrame depending on the fingerprint used?
|
github_jupyter
|
<a href="https://colab.research.google.com/github/adasegroup/ML2021_seminars/blob/master/seminar13/gp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Gaussian Processes (GP) with GPy
In this notebook we are going to use GPy library for GP modeling [SheffieldML github page](https://github.com/SheffieldML/GPy).
Why **GPy**?
* Specialized library of GP models (regression, classification, GPLVM)
* Variety of covariance functions is implemented
* There are GP models for large-scale problems
* Easy to use
Run the following line to install GPy library
```
!pip install GPy
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import GPy
%matplotlib inline
```
Current documentation of GPy library can be found [here](http://gpy.readthedocs.org/en/latest/).
## Gaussian Process Regression
A data set $\left (X, \mathbf{y} \right ) = \left \{ (x_i, y_i), x_i \in \mathbb{R}^d, y_i \in \mathbb{R} \right \}_{i = 1}^N$ is given.
Assumption:
$$
y = f(x) + \varepsilon,
$$
where $f(x)$ is a Gaussian process and $\varepsilon \sim \mathcal{N}(0, \sigma_n^2)$ is Gaussian noise.
Posterior distribution of function value $y^*$ at point $x^*$
$$
y_* | X, \mathbf{y}, x_* \sim \mathcal{N}(m(x_*), \sigma(x_*)),
$$
with predictive mean and variance given by
$$
m(x_*) = \mathbf{k}^T \mathbf{K}_y^{-1} \mathbf{y} = \sum_{i = 1}^N \alpha_i k(x_*, x_i),
$$
$$
\sigma^2(x_*) = k(x_*, x_*) - \mathbf{k}^T\mathbf{K}_y^{-1}\mathbf{k},
$$
where
$$
\mathbf{k} = \left ( k(x_*, x_1), \ldots, k(x_*, x_N) \right )^T
$$
$$
\mathbf{K}_y = \|k(x_i, x_j)\|_{i, j = 1}^N + \sigma_n^2 \mathbf{I}
$$
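To make these formulas concrete, here is a minimal NumPy sketch of the predictive equations (my own illustration with an assumed RBF kernel and noise level; GPy computes all of this internally).
```
import numpy as np

def rbf(a, b, variance=1.0, lengthscale=0.2):
    # squared-exponential kernel for 1D column inputs of shape (n, 1)
    return variance * np.exp(-0.5 * (a - b.T) ** 2 / lengthscale ** 2)

def gp_predict(X, y, x_star, noise_var=0.01):
    K_y = rbf(X, X) + noise_var * np.eye(len(X))      # K + sigma_n^2 I
    k_star = rbf(X, x_star)                           # vector k, shape (N, 1)
    mean = k_star.T @ np.linalg.solve(K_y, y)         # k^T K_y^{-1} y
    var = rbf(x_star, x_star) - k_star.T @ np.linalg.solve(K_y, k_star)
    return float(mean), float(var)
```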
### Exercises
1. What is the posterior variance at the points from the training set? What if the noise variance is equal to 0?
2. Suppose that we want to minimize some unknown function $f(\mathbf{x})$.
We are given a set of observations $y_i = f(\mathbf{x}_i) + \varepsilon_i$, where $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$.
Using the observations we built a GP model $\hat{f}(\mathbf{x})$.
Now, let us consider the value called *improvement*:
$$
I(\mathbf{x}) = \max(0, y^* - f(\mathbf{x})),
$$
where $y^*$ is currently found minimum value of $f(\mathbf{x})$.
To choose the next candidate for the minimum we would like to maximize the *Expected Improvement*
$$
EI(x) = \mathbb{E}_f I(\mathbf{x})
$$
1. Express the $EI(\mathbf{x})$ in terms $\Phi(\cdot)$ and $\phi(\cdot)$ - the pdf and cdf of the standard normal distribution $\mathcal{N}(0, 1)$.
2. Assuming $\sigma = 0$ what is the value of $EI(\mathbf{x})$ for any value $y_i$ from the dataset?
## Building GPR model
Let's fit a GPR model for the function $f(x) = -\cos(\pi x) + \sin(4\pi x)$ on $[0, 1]$,
with noise $y(x) = f(x) + \epsilon$, $\epsilon \sim \mathcal{N}(0, 0.1)$.
```
N = 10
X = np.linspace(0.05, 0.95, N).reshape(-1, 1)
Y = -np.cos(np.pi * X) + np.sin(4 * np.pi * X) + \
np.random.normal(loc=0.0, scale=0.1, size=(N, 1))
plt.figure(figsize=(5, 3))
plt.plot(X, Y, '.')
```
#### 1. Define covariance function
The most popular kernel - RBF kernel - has 2 parameters: `variance` and `lengthscale`, $k(x, y) = \sigma^2 \exp\left ( -\dfrac{\|x - y\|^2}{2l^2}\right )$,
where `variance` is $\sigma^2$, and `lengthscale` - $l$.
```
input_dim = 1
variance = 1
lengthscale = 0.2
kernel = GPy.kern.RBF(input_dim, variance=variance,
lengthscale=lengthscale)
```
#### 2. Create GPR model
```
model = GPy.models.GPRegression(X, Y, kernel)
print(model)
model.plot(figsize=(5, 3))
```
### Parameters of the covariance function
Values of parameters of covariance function can be set like: `k.lengthscale = 0.1`.
Let's change the value of `lengthscale` parameter and see how it changes the covariance function.
```
k = GPy.kern.RBF(1)
theta = np.asarray([0.2, 0.5, 1, 2, 4, 10])
figure, axes = plt.subplots(2, 3, figsize=(8, 4))
for t, ax in zip(theta, axes.ravel()):
k.lengthscale = t
k.plot(ax=ax)
ax.set_ylim([0, 1])
ax.set_xlim([-4, 4])
ax.legend([t])
```
### Task
Try to change parameters to obtain more accurate model.
```
######## Your code here ########
kernel =
model =
model.Gaussian_noise.variance.fix(0.01)
print(model)
model.plot()
```
### Tuning parameters of the covariance function
The parameters are tuned by maximizing likelihood. To do it just use `optimize()` method of the model.
```
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize()
print(model)
model.plot(figsize=(5, 3))
```
### Noise variance
Noise variance acts like a regularization in GP models. Larger values of the noise variance lead to a smoother model.
Let's check it: try to change noise variance to some large value, to some small value and see the results.
Noise variance accessed like this: `model.Gaussian_noise.variance = 1`
```
######## Your code here ########
model.Gaussian_noise.variance =
model.plot(figsize=(5, 3))
```
Now, let's generate more noisy data and try to fit model.
```
N = 40
X = np.linspace(0.05, 0.95, N).reshape(-1, 1)
Y = -np.cos(np.pi * X) + np.sin(4 * np.pi * X) + \
np.random.normal(loc=0.0, scale=0.5, size=(N, 1))
kernel = GPy.kern.RBF(1)
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize()
print(model)
model.plot(figsize=(5, 3))
```
Now, let's fix noise variance to some small value and fit the model
```
kernel = GPy.kern.RBF(1)
model = GPy.models.GPRegression(X, Y, kernel)
model.Gaussian_noise.variance.fix(0.01)
model.optimize()
model.plot(figsize=(5, 3))
```
## Approximate multi-dimensional function
```
def rosenbrock(x):
x = 0.5 * (4 * x - 2)
y = np.sum((1 - x[:, :-1])**2 +
100 * (x[:, 1:] - x[:, :-1]**2)**2, axis=1)
return y
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from sklearn.metrics import mean_squared_error
def plot_2d_func(func, n_rows=1, n_cols=1, title=None):
grid_size = 100
x_grid = np.meshgrid(np.linspace(0, 1, grid_size), np.linspace(0, 1, grid_size))
x_grid = np.hstack((x_grid[0].reshape(-1, 1), x_grid[1].reshape(-1, 1)))
y = func(x_grid)
fig = plt.figure(figsize=(n_cols * 6, n_rows * 6))
ax = fig.add_subplot(n_rows, n_cols, 1, projection='3d')
ax.plot_surface(x_grid[:, 0].reshape(grid_size, grid_size), x_grid[:, 1].reshape(grid_size, grid_size),
y.reshape(grid_size, grid_size),
cmap=cm.jet, rstride=1, cstride=1)
if title is not None:
ax.set_title(title)
return fig
```
#### Here how the function looks like in 2D
```
fig = plot_2d_func(rosenbrock)
```
### Training set
Note that it is 3-dimensional now
```
dim = 3
x_train = np.random.rand(300, dim)
y_train = rosenbrock(x_train).reshape(-1, 1)
```
### Task
Try to approximate the Rosenbrock function using the RBF kernel. The MSE (mean squared error) should be $<10^{-2}$.
**Hint**: if the results are not good, it may be due to a bad local minimum. You can do one of the following things:
1. Try to use multi-start by calling `model.optimize_restarts(n_restarts)` method of the model.
2. Constrain model parameters to some reasonable bounds. You can do it for example as follows:
`model.Gaussian_noise.variance.constrain_bounded(0, 1)`
```
######## Your code here ########
model =
x_test = np.random.rand(3000, dim)
y_test = rosenbrock(x_test)
y_pr = model.predict(x_test)[0]
mse = mean_squared_error(y_test.ravel(), y_pr.ravel())
print('\nMSE: {}'.format(mse))
```
### Covariance functions
Short info about covariance function can be printed using `print(k)`.
```
k = GPy.kern.RBF(1)
print(k)
```
You can plot the covariance function using `plot()` method.
```
k.plot(figsize=(5, 3))
```
## More "complex" functions
The most popular covariance function is the RBF. However, not all functions can be modelled well using the RBF covariance function. For example, approximations of discontinuous functions suffer from oscillations, and approximations of highly oscillatory ("curvy") functions may suffer from oversmoothing.
```
def heaviside(x):
return np.asfarray(x > 0)
def rastrigin(x):
"""
Parameters
==========
x : ndarray - 2D array in [0, 1]
Returns
=======
1D array of values of Rastrigin function
"""
scale = 8 # 10.24
x = scale * x - scale / 2
y = 10 * x.shape[1] + (x**2).sum(axis=1) - 10 * np.cos(2 * np.pi * x).sum(axis=1)
return y
fig = plot_2d_func(rastrigin, 1, 2, title='Rastrigin function')
x = np.linspace(-1, 1, 100)
y = heaviside(x)
ax = fig.add_subplot(1, 2, 2)
ax.plot(x, y)
ax.set_title('Heaviside function')
plt.show()
```
#### Example of oscillations
As you can see, there are oscillations in the vicinity of the discontinuity, because we are trying to approximate
a discontinuous function with an infinitely smooth function.
```
np.random.seed(42)
X = np.random.rand(30, 1) * 2 - 1
y = heaviside(X)
k = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
m = GPy.models.GPRegression(X, y, k)
m.optimize()
m.plot(figsize=(5, 3))
plt.ylim([-0.2, 1.2])
```
#### Example of oversmoothing
Actually, the GP model only approximates the trend of the function;
all the oscillations are treated as noise.
The knowledge that there is in fact some repeated structure should be incorporated into the model via the kernel function.
```
np.random.seed(42)
X = np.random.rand(300, 2)
y = rastrigin(X)
k = GPy.kern.RBF(input_dim=2)
m = GPy.models.GPRegression(X, y.reshape(-1, 1), k)
m.optimize()
fig = plot_2d_func(lambda x: m.predict(x)[0])
```
### Covariance functions in GPy
Popular covariance functions: `Exponential`, `Matern32`, `Matern52`, `RatQuad`, `Linear`, `StdPeriodic`.
* Exponential:
$$
k(x, x') = \sigma^2 \exp \left (-\frac{r}{l} \right), \quad r = \|x - x'\|
$$
* Matern32
$$
k(x, x') = \sigma^2 \left (1 + \sqrt{3}\frac{r}{l} \right )\exp \left (-\sqrt{3}\frac{r}{l} \right )
$$
* Matern52
$$
k(x, x') = \sigma^2 \left (1 + \sqrt{5}\frac{r}{l} + \frac{5}{3}\frac{r^2}{l^2} \right ) \exp \left (-\sqrt{5}\frac{r}{l} \right )
$$
* RatQuad
$$
k(x, x') = \sigma^2 \left ( 1 + \frac{r^2}{2\alpha l^2}\right )^{-\alpha}
$$
* Linear
$$
k(x, x') = \sum_i \sigma_i^2 x_i x_i'
$$
* Poly
$$
k(x, x') = \sigma^2 (x^T x' + c)^d
$$
* StdPeriodic
$$
k(x, x') = \sigma^2 \exp\left ( -2 \frac{\sin^2(\pi r)}{l^2}\right )
$$
```
covariance_functions = [GPy.kern.Exponential(1), GPy.kern.Matern32(1),
GPy.kern.RatQuad(1), GPy.kern.Linear(1),
GPy.kern.Poly(1), GPy.kern.StdPeriodic(1)]
figure, axes = plt.subplots(2, 3, figsize=(9, 6))
axes = axes.ravel()
for i, k in enumerate(covariance_functions):
k.plot(ax=axes[i])
axes[i].set_title(k.name)
figure.tight_layout()
```
## Combination of covariance functions
* A sum of covariance functions is a valid covariance function:
$$
k(x, x') = k_1(x, x') + k_2(x, x')
$$
* A product of covariance functions is a valid covariance function:
$$
k(x, x') = k_1(x, x') k_2(x, x')
$$
### Combinations of covariance functions in GPy
In GPy to combine covariance functions you can just use operators `+` and `*`.
Let's plot some of the combinations
```
covariance_functions = [GPy.kern.Linear(input_dim=1), GPy.kern.StdPeriodic(input_dim=1), GPy.kern.RBF(input_dim=1, lengthscale=1)]
operations = {'+': lambda x, y: x + y, '*': lambda x, y: x * y}
figure, axes = plt.subplots(len(operations), len(covariance_functions), figsize=(9, 6))
import itertools
axes = axes.ravel()
count = 0
for j, base_kernels in enumerate(itertools.combinations(covariance_functions, 2)):
for k, (op_name, op) in enumerate(operations.items()):
kernel = op(base_kernels[0], base_kernels[1])
kernel.plot(ax=axes[count])
axes[count].set_title('{} {} {}'.format(base_kernels[0].name, op_name, base_kernels[1].name),
fontsize=14)
count += 1
figure.tight_layout()
```
### Additive kernels
One popular approach to modeling the function of interest is
$$
f(x) = \sum_{i=1}^d f_i(x_i) + \sum_{i < j} f_{ij}(x_i, x_j) + \ldots
$$
**Example**: $\quad f(x_1, x_2) = f_1(x_1) + f_2(x_2)$
To model it using GP use additive kernel $\quad k(x, y) = k_1(x_1, y_1) + k_2(x_2, y_2)$.
More generally, add kernels, each depending on a subset of the inputs
$$
k(x, y) = k_1(x, y) + \ldots + k_D(x, y),
$$
where, for example, $k_1(x, x') = k_1(x_1, x_1'), \; k_2(x, x') = k_2((x_1, x_3), (x_1', x_3'))$, etc.
Here is an example of ${\rm RBF}(x_1) + {\rm RBF}(x_2)$
```
k1 = GPy.kern.RBF(1, active_dims=[0])
k2 = GPy.kern.RBF(1, active_dims=[1])
kernel = k1 + k2
x = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
x = np.hstack((x[0].reshape(-1, 1), x[1].reshape(-1, 1)))
z = kernel.K(x, np.array([[0, 0]]))
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
figure = plt.figure()
ax = figure.add_subplot(111, projection='3d')
ax.plot_surface(x[:, 0].reshape(50, 50), x[:, 1].reshape(50, 50), z.reshape(50, 50), cmap=cm.jet)
plt.show()
```
### Kernels on arbitrary types of objects
Kernels can be defined over all types of data structures: text, images, matrices, graphs, etc. You just need to define similarity between objects.
#### Kernels on categorical data
* Represent your categorical variable by a one-of-k encoding: $\quad x = (x_1, \ldots, x_k)$.
* Use an RBF kernel with `ARD=True`: $\quad k(x , x') = \sigma^2 \prod_{i = 1}^k\exp{\left ( -\dfrac{(x_i - x_i')^2}{\sigma_i^2} \right )}$. The per-dimension lengthscales now encode how similar the function is across categories.
* Short lengthscales for categorical variables mean that your model is not sharing any information between data of different categories.
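A minimal sketch of this recipe (toy data and made-up category effects, just to show the GPy calls):
```
import numpy as np
import GPy

n_cat = 3
categories = np.random.randint(0, n_cat, size=50)
X_onehot = np.eye(n_cat)[categories]                    # one-of-k encoding, shape (50, 3)
y = categories[:, None] + 0.1 * np.random.randn(50, 1)  # toy response
kernel = GPy.kern.RBF(input_dim=n_cat, ARD=True)        # one lengthscale per category column
model = GPy.models.GPRegression(X_onehot, y, kernel)
model.optimize()
print(kernel.lengthscale)                               # per-dimension lengthscales
```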
## Sampling from a GP
So, you have defined some complex kernel.
You can plot it to see how it looks and guess what kind of functions it can approximate.
Another way to do it is to actually generate random functions using this kernel.
A GP defines a distribution over functions, which is specified by its *mean function* $m(x)$ and *covariance function* $k(x, y)$: for any set $\mathbf{x}_1, \ldots, \mathbf{x}_N \in \mathbb{R}^d \rightarrow$ $\left (f(\mathbf{x}_1), \ldots, f(\mathbf{x}_N) \right ) \sim \mathcal{N}(\mathbf{m}, \mathbf{K})$,
where $\mathbf{m} = \left (m(\mathbf{x}_1), \ldots, m(\mathbf{x}_N) \right )$, $\mathbf{K} = \|k(\mathbf{x}_i, \mathbf{x}_j)\|_{i,j=1}^N$.
Sampling procedure:
1. Generate set of points $\mathbf{x}_1, \ldots, \mathbf{x}_N$.
2. Calculate the mean vector and covariance matrix $\mathbf{m} = \left (m(\mathbf{x}_1), \ldots, m(\mathbf{x}_N) \right )$, $\mathbf{K} = \|k(\mathbf{x}_i, \mathbf{x}_j)\|_{i,j=1}^N$.
3. Generate vector from multivariate normal distribution $\mathcal{N}(\mathbf{m}, \mathbf{K})$.
Below try to change RBF kernel to some other kernel and see the results.
```
k = GPy.kern.RBF(input_dim=1, lengthscale=0.3)
X = np.linspace(0, 5, 500).reshape(-1, 1)
mu = np.zeros(500)
C = k.K(X, X)
Z = np.random.multivariate_normal(mu, C, 3)
plt.figure()
for i in range(3):
plt.plot(X, Z[i, :])
```
### Task
Build a GP model that predicts airline passenger counts on international flights.
```
!wget https://github.com/adasegroup/ML2020_seminars/raw/master/seminar11/data/airline.npz
data = np.load('airline.npz')
X = data['X']
y = data['y']
train_indices = list(range(70)) + list(range(90, 129))
test_indices = range(70, 90)
X_train = X[train_indices]
y_train = y[train_indices]
X_test = X[test_indices]
y_test = y[test_indices]
plt.figure(figsize=(5, 3))
plt.plot(X_train, y_train, '.')
```
You need to obtain something like this
<img src=https://github.com/adasegroup/ML2020_seminars/raw/master/seminar11/imgs/airline_result.png>
```
def plot_model(X, y, model):
x = np.linspace(1948, 1964, 400).reshape(-1, 1)
prediction_mean, prediction_var = model.predict(x)
prediction_std = np.sqrt(prediction_var).ravel()
prediction_mean = prediction_mean.ravel()
plt.figure(figsize=(5, 3))
plt.plot(X, y, '.', label='Train data')
plt.plot(x, prediction_mean, label='Prediction')
plt.fill_between(x.ravel(), prediction_mean - prediction_std, prediction_mean + prediction_std, alpha=0.3)
```
#### Let's try RBF kernel
```
######## Your code here ########
k_rbf =
```
As you can see below, it doesn't work ;(
```
model = GPy.models.GPRegression(X, y, k_rbf)
model.optimize()
print(model)
plot_model(X_train, y_train, model)
```
We will try to model this data set using 3 additive components: trend, seasonality and noise.
So, the kernel should be a sum of 3 kernels:
`kernel = kernel_trend + kernel_seasonality + kernel_noise`
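One possible combination (offered only as a hint; the parameters are untuned and other choices may work better) could look like the sketch below, using GPy's `Linear`, `RBF`, `StdPeriodic` and `White` kernels:
```
# A possible composite kernel (one of many reasonable choices)
k_trend = GPy.kern.Linear(1) + GPy.kern.RBF(1)                  # linear trend plus mild nonlinearity
k_seasonality = GPy.kern.StdPeriodic(1) * GPy.kern.Linear(1)    # periodic component with growing amplitude
k_noise = GPy.kern.White(1) * GPy.kern.Linear(1)                # noise whose variance grows with x
kernel = k_trend + k_seasonality + k_noise
```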
#### Let's first try to model trend
The trend is almost linear with some small nonlinearity, so you can use a sum of a linear kernel and some other kernel which captures this small nonlinearity.
```
######## Your code here ########
k_trend =
model = GPy.models.GPRegression(X, y, k_trend)
model.optimize()
print(model)
plot_model(X_train, y_train, model)
```
#### Let's model periodicity
A periodic kernel alone will not work (why?).
Try to use a product of a periodic kernel with some other kernel (or maybe two other kernels).
Note that the amplitude increases with x.
```
######## Your code here ########
k_trend =
k_seasonal =
kernel = k_trend + k_seasonal
model = GPy.models.GPRegression(X, y, kernel)
model.optimize()
print(model)
plot_model(X_train, y_train, model)
```
#### Let's add a noise model
The dataset is heteroscedastic, i.e. noise variance depends on x: it increases linearly with x.
Noise can be modeled using `GPy.kern.White(1)`, but that kernel assumes the noise variance is the same at every x.
By what kernel should it be multiplied?
```
######## Your code here ########
k_trend =
k_periodicity =
k_noise =
kernel = k_trend + k_periodicity + k_noise
model = GPy.models.GPRegression(X, y, kernel)
model.optimize()
print(model)
plot_model(X_train, y_train, model)
```
# Automatic covariance structure search
We can construct the kernel in an automatic way.
Here is our data set (almost the same as before):
```
idx_test = np.where((X[:,0] > 1957))[0]
idx_train = np.where((X[:,0] <= 1957))[0]
X_train = X[idx_train]
y_train = y[idx_train]
X_test = X[idx_test]
y_test = y[idx_test]
plt.figure(figsize=(7, 5))
plt.plot(X_train, y_train, '.', color='red');
plt.plot(X_test, y_test, '.', color='green');
def plot_model_learned(X, y, train_idx, test_idx, model):
prediction_mean, prediction_var = model.predict(X)
prediction_std = np.sqrt(prediction_var).ravel()
prediction_mean = prediction_mean.ravel()
plt.figure(figsize=(7, 5))
plt.plot(X, y, '.')
plt.plot(X[train_idx], y[train_idx], '.', color='green')
plt.plot(X, prediction_mean, color='red')
plt.fill_between(X.ravel(), prediction_mean - prediction_std, prediction_mean + prediction_std, alpha=0.3)
```
## Expressing Structure Through Kernels
For example:
$$
\underbrace{\text{RBF}\times\text{Lin}}_\text{increasing trend} + \underbrace{\text{RBF}\times\text{Per}}_\text{varying-amplitude periodic} + \underbrace{\text{RBF}}_\text{residual}
$$
## Greedy Searching for the Optimum Kernel Combination
One can wonder: how do we automatically search for the kernel structure? We can optimize some criterion which balances the loss function value against the complexity of the model.
A reasonable candidate for this is the BIC criterion:
$$
BIC = -2 \cdot \text{Log-Likelihood} + m \cdot \log{n},
$$
where $n$ is the sample size and $m$ is the number of parameters.
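Written directly from this formula, a minimal sketch (assuming a fitted GPy model `model` trained on `n` points; both names are placeholders) is:
```
import numpy as np

def bic_of(model, n):
    # BIC = -2 * log-likelihood + m * log(n), with m = number of model parameters
    m = len(model.param_array)
    return -2.0 * model.log_likelihood() + m * np.log(n)
```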
However, the procedure of fitting a Gaussian Process is quite expensive, $O(n^3)$. Hence, instead of a combinatorial search through all possible combinations, we grow the kernel structure greedily.
You can find more details at https://github.com/jamesrobertlloyd/gp-structure-search. For now, we present a toy example of the algorithm.
Consider the set of operations:
$$
\text{Algebra: } +,\times
$$
and the set of basic kernels:
$$
\text{Kernels: } \text{Poly}, \text{RBF}, \text{Periodic}
$$
At each level we select the extension of our current kernel with the lowest BIC. This is an example of a possible kernel growing process (the mark denotes the lowest BIC at each level):
<img src='https://github.com/adasegroup/ML2020_seminars/raw/master/seminar11/imgs/gp.png'>
### Task*
Implement a function that trains a model with the given kernel and dataset, then calculates and returns the BIC.
The log-likelihood of the model can be calculated using the `model.log_likelihood()` method;
the number of parameters of the model can be obtained via `len(model.param_array)`.
```
def train_model_get_bic(X_train, y_train, kernel, num_restarts=1):
'''
Input:
X_train: numpy array of train features, n*d (d>=1)
y_train: numpy array n*1
kernel: GPy object kern
num_restars: number of the restarts of the optimization routine
Output:
bic value
'''
kernel = kernel.copy()
######## Your code here ########
return bic
```
Here is a utility function which takes a list of kernels and the operations between them, calculates all the product kernels
and returns a list of them.
After that we only need to take the sum of the kernels from this list.
```
def _get_all_product_kernels(op_list, kernel_list):
'''
Find product pairs and calculate them.
For example, if we are given expression:
K = k1 * k2 + k3 * k4 * k5
the function will calculate all the product kernels
k_mul_1 = k1 * k2
k_mul_2 = k3 * k4 * k5
and return list [k_mul_1, k_mul_2].
'''
product_index = np.where(np.array(op_list) == '*')[0]
if len(product_index) == 0:
return kernel_list
product_index = product_index[0]
product_kernel = kernel_list[product_index] * kernel_list[product_index + 1]
if len(op_list) == product_index + 1:
kernel_list_copy = kernel_list[:product_index] + [product_kernel]
op_list_copy = op_list[:product_index]
else:
kernel_list_copy = kernel_list[:product_index] + [product_kernel] + kernel_list[product_index + 2:]
op_list_copy = op_list[:product_index] + op_list[product_index + 1:]
return _get_all_product_kernels(op_list_copy, kernel_list_copy)
```
### Task*
This is the main class; you need to implement several methods inside:
1. method `init_kernel()` - constructs the initial model, i.e. the model with one kernel. You just need to iterate through the list of base kernels and choose the best one according to BIC.
2. method `grow_level()` - adds a new level. You need to iterate through all base kernels and all operations,
apply each operation to the previously constructed kernel and each base kernel (use the method `_make_kernel()` for this) and then choose the best one according to BIC.
```
class GreedyKernel:
'''
Class for greedy growing kernel structure
'''
def __init__(self, algebra, base_kernels):
self.algebra = algebra
self.base_kernels = base_kernels
self.kernel = None
self.kernel_list = []
self.op_list = []
self.str_kernel = None
def _make_kernel(self, op_list, kernel_list):
'''
Summation in the kernel expression
'''
kernels_to_sum = _get_all_product_kernels(op_list, kernel_list)
new_kernel = kernels_to_sum[0]
for k in kernels_to_sum[1:]:
new_kernel = new_kernel + k
return new_kernel
def init_kernel(self, X_train, y_train):
'''
Initialization of first kernel
'''
best_kernel = None
###### Your code here ######
# You need just iterate through the list of base kernels and choose the best one according to BIC
# save the kernel in `best_kernel` variable
# base kernels are given by self.base_kernels --- list of kernel objects
############################
assert best_kernel is not None
self.kernel_list.append(best_kernel)
self.str_kernel = str(best_kernel.name)
def grow_level(self, X_train, y_train):
'''
Select optimal extension of current kernel
'''
best_kernel = None # should be kernel object
best_op = None # should be operation name, i.e. "+" or "*"
###### Your code here ######
# You need to iterate through all base kernels and all operations,
# apply each operation to the previously constructed kernel and each base kernel
# (use method `_make_kernel()` for this) and then choose the best one according to BIC.
# base kernels are given by self.base_kernels --- list of kernel objects
# operations are given by self.algebra --- dictionary:
# {"+": lambda x, y: x + y
# "*": lambda x, y: x * y}
# best_kernel - kernel object, store in this variable the best found kernel
# best_op - '+' or '*', store in this variable the best found operation
############################
assert best_kernel is not None
assert best_op is not None
self.kernel_list.append(best_kernel)
self.op_list.append(best_op)
new_kernel = self._make_kernel(self.op_list, self.kernel_list)
str_new_kernel = '{} {} {}'.format(self.str_kernel, best_op, best_kernel.name)
return new_kernel, str_new_kernel
def grow_tree(self, X_train, y_train, max_depth):
'''
Greedy kernel growing
'''
if self.kernel is None:
self.init_kernel(X_train, y_train)
for i in range(max_depth):
self.kernel, self.str_kernel = self.grow_level(X_train, y_train)
print(self.str_kernel)
def fit_model(self, X_train, y_train, kernel, num_restarts=1):
model = GPy.models.GPRegression(X_train, y_train, kernel)
model.optimize_restarts(num_restarts, verbose=False)
return model
```
Now let us define the algebra and the list of base kernels.
To make the learning process more robust, we constrain some parameters of the kernels to lie within
reasonable intervals.
```
# operations under kernels:
algebra = {'+': lambda x, y: x + y,
'*': lambda x, y: x * y
}
# basic kernels list:
poly_kern = GPy.kern.Poly(input_dim=1, order=1)
periodic_kern = GPy.kern.StdPeriodic(input_dim=1)
periodic_kern.period.constrain_bounded(1e-2, 1e1)
periodic_kern.lengthscale.constrain_bounded(1e-2, 1e1)
rbf_kern = GPy.kern.RBF(input_dim=1)
rbf_kern.lengthscale.constrain_bounded(1e-2, 1e1)
bias_kern = GPy.kern.Bias(1)
kernels_list = [poly_kern, periodic_kern, rbf_kern]
```
Let's train the model.
You should obtain something which is more accurate than the trend model ;)
```
GK = GreedyKernel(algebra, kernels_list)
GK.grow_tree(X_train, y_train, 4)
model = GK.fit_model(X_train, y_train, GK.kernel)
plot_model_learned(X, y, idx_train, idx_test, model)
```
## Bonus Task
Try to approximate the Rastrigin function.
```
fig = plot_2d_func(rastrigin)
```
### Training set
```
np.random.seed(42)  # fix the random seed for reproducibility
x_train = np.random.rand(200, 2)
y_train = rastrigin(x_train)
```
#### Hint: you can constrain parameters of the covariance functions, for example
`model.std_periodic.period.constrain_bounded(0, 0.2)`.
```
######## Your code here ########
model =
print(model)
from sklearn.metrics import mean_squared_error  # used for the evaluation below
x_test = np.random.rand(1000, 2)
y_test = rastrigin(x_test)
y_pr = model.predict(x_test)[0]
mse = mean_squared_error(y_test.ravel(), y_pr.ravel())
print('MSE: {}'.format(mse))
fig = plot_2d_func(lambda x: model.predict(x)[0])
```
# Appendix: Gaussian Process Classification
### Classification
A data set $\left (X, \mathbf{y} \right ) = \left \{ (x_i, y_i), x_i \in \mathbb{R}^d, y_i \in \{+1, -1\} \right \}_{i = 1}^N$ is given.
Assumption:
$$
p(y = +1 \; | \; x) = \sigma(f(x)) = \pi(x),
$$
where the latent function $f(x)$ is a Gaussian Process.
We need to produce a probabilistic prediction
$$
\pi_* = p(y_* \; | \; X, \mathbf{y}, x_*) = \int \sigma(f_*) p(f_* \; | \; X, \mathbf{y}, x_*) df_*,
$$
$$
p(f_* \; | \; X, \mathbf{y}, x_*) = \int p(f_* \; | \; X, x_*, \mathbf{f}) p(\mathbf{f} \; | \; X, \mathbf{y}) d\mathbf{f},
$$
where $p(\mathbf{f} \; |\; X, \mathbf{y}) = \dfrac{p(\mathbf{y} | X, \mathbf{f}) p(\mathbf{f} | X)}{p(\mathbf{y} | X)}$ is the posterior over the latent variables.
Both integrals are intractable, so an approximation technique such as the Laplace approximation or Expectation Propagation is used.
```
from matplotlib import cm
def cylinder(x):
y = (1 / 7.0 - (x[:, 0] - 0.5)**2 - (x[:, 1] - 0.5)**2) > 0
return y
np.random.seed(42)
X = np.random.rand(40, 2)
y = cylinder(X)
x_grid = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
y_grid = cylinder(np.hstack((x_grid[0].reshape(-1, 1), x_grid[1].reshape(-1, 1)))).reshape(x_grid[0].shape)
positive_idx = y == 1
plt.figure(figsize=(5, 3))
plt.plot(X[positive_idx, 0], X[positive_idx, 1], '.', markersize=10, label='Positive')
plt.plot(X[~positive_idx, 0], X[~positive_idx, 1], '.', markersize=10, label='Negative')
im = plt.contour(x_grid[0], x_grid[1], y_grid, 10, cmap=cm.hot)
plt.colorbar(im)
plt.legend()
plt.show()
kernel = GPy.kern.RBF(2, variance=1., lengthscale=0.2, ARD=True)
model = GPy.models.GPClassification(X, y.reshape(-1, 1), kernel=kernel)
model.optimize()
print(model)
def plot_model_2d(model):
model.plot(levels=40, resolution=80, plot_data=False, figsize=(5, 3))
plt.plot(X[positive_idx, 0], X[positive_idx, 1], '.', markersize=10, label='Positive')
plt.plot(X[~positive_idx, 0], X[~positive_idx, 1], '.', markersize=10, label='Negative')
plt.legend()
plt.show()
plot_model_2d(model)
```
Let's change the lengthscale to some small value.
```
model.rbf.lengthscale = [0.05, 0.05]
plot_model_2d(model)
```
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from importlib import reload
from deeprank.dataset import DataLoader, PairGenerator, ListGenerator
from deeprank import utils
seed = 1234
torch.manual_seed(seed)
loader = DataLoader('./config/letor07_mp_fold1.model')
import json
letor_config = json.loads(open('./config/letor07_mp_fold1.model').read())
#device = torch.device("cuda")
#device = torch.device("cpu")
select_device = torch.device("cpu")
rank_device = torch.device("cuda")
Letor07Path = letor_config['data_dir']
letor_config['fill_word'] = loader._PAD_
letor_config['embedding'] = loader.embedding
letor_config['feat_size'] = loader.feat_size
letor_config['vocab_size'] = loader.embedding.shape[0]
letor_config['embed_dim'] = loader.embedding.shape[1]
letor_config['pad_value'] = loader._PAD_
pair_gen = PairGenerator(rel_file=Letor07Path + '/relation.train.fold%d.txt'%(letor_config['fold']),
config=letor_config)
from deeprank import select_module
from deeprank import rank_module
letor_config['max_match'] = 20
letor_config['win_size'] = 5
select_net = select_module.QueryCentricNet(config=letor_config, out_device=rank_device)
select_net = select_net.to(select_device)
select_net.train()
'''
letor_config['q_limit'] = 20
letor_config['d_limit'] = 2000
letor_config['max_match'] = 20
letor_config['win_size'] = 5
letor_config['finetune_embed'] = True
letor_config['lr'] = 0.0001
select_net = select_module.PointerNet(config=letor_config)
select_net = select_net.to(device)
select_net.embedding.weight.data.copy_(torch.from_numpy(loader.embedding))
select_net.train()
select_optimizer = optim.RMSprop(select_net.parameters(), lr=letor_config['lr'])
'''
letor_config["dim_q"] = 1
letor_config["dim_d"] = 1
letor_config["dim_weight"] = 1
letor_config["c_reduce"] = [1, 1]
letor_config["k_reduce"] = [1, 50]
letor_config["s_reduce"] = 1
letor_config["p_reduce"] = [0, 0]
letor_config["c_en_conv_out"] = 4
letor_config["k_en_conv"] = 3
letor_config["s_en_conv"] = 1
letor_config["p_en_conv"] = 1
letor_config["en_pool_out"] = [1, 1]
letor_config["en_leaky"] = 0.2
letor_config["dim_gru_hidden"] = 3
letor_config['lr'] = 0.005
letor_config['finetune_embed'] = False
rank_net = rank_module.DeepRankNet(config=letor_config)
rank_net = rank_net.to(rank_device)
rank_net.embedding.weight.data.copy_(torch.from_numpy(loader.embedding))
rank_net.qw_embedding.weight.data.copy_(torch.from_numpy(loader.idf_embedding))
rank_net.train()
rank_optimizer = optim.Adam(rank_net.parameters(), lr=letor_config['lr'])
def to_device(*variables, device):
return (torch.from_numpy(variable).to(device) for variable in variables)
def show_text(x):
print(' '.join([loader.word_dict[w.item()] for w in x]))
X1, X1_len, X1_id, X2, X2_len, X2_id, Y, F = \
pair_gen.get_batch(data1=loader.query_data, data2=loader.doc_data)
X1, X1_len, X2, X2_len, Y, F = \
to_device(X1, X1_len, X2, X2_len, Y, F, device=rank_device)
show_text(X2[0])
X1, X2_new, X1_len, X2_len_new, X2_pos = select_net(X1, X2, X1_len, X2_len, X1_id, X2_id)
show_text(X1[0])
for i in range(5):
print(i, end=' ')
show_text(X2_new[0][i])
print(X2_pos[20].shape)
print(len(X2_pos))
print(len(X2))
print(X2_pos[0])
print(X2_pos[1])
# X1 = X1[:1]
# X1_len = X1_len[:1]
# X2 = X2[:1]
# X2_len = X2_len[:1]
# X1_id = X1_id[:1]
# X2_id = X2_id[:1]
# show_text(X2[0])
# X1, X2_new, X1_len, X2_len_new = select_net(X1, X2, X1_len, X2_len, X1_id, X2_id)
# show_text(X1[0])
# for i in range(5):
# print(i, end=' ')
# show_text(X2_new[0][i])
import time
rank_loss_list = []
start_t = time.time()
for i in range(1000):
# One Step Forward
X1, X1_len, X1_id, X2, X2_len, X2_id, Y, F = \
pair_gen.get_batch(data1=loader.query_data, data2=loader.doc_data)
X1, X1_len, X2, X2_len, Y, F = \
to_device(X1, X1_len, X2, X2_len, Y, F, device=select_device)
X1, X2, X1_len, X2_len, X2_pos = select_net(X1, X2, X1_len, X2_len, X1_id, X2_id)
X2, X2_len = utils.data_adaptor(X2, X2_len, select_net, rank_net, letor_config)
output = rank_net(X1, X2, X1_len, X2_len, X2_pos)
# Update Rank Net
rank_loss = rank_net.pair_loss(output, Y)
print('rank loss:', rank_loss.item())
rank_loss_list.append(rank_loss.item())
rank_optimizer.zero_grad()
rank_loss.backward()
rank_optimizer.step()
end_t = time.time()
print('Time Cost: %s s' % (end_t-start_t))
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure()
plt.plot(rank_loss_list)
plt.show()
torch.save(select_net, "qcentric.model")
torch.save(rank_net, "deeprank.model")
select_net_e = torch.load(f='qcentric.model')
rank_net_e = torch.load(f='deeprank.model')
list_gen = ListGenerator(rel_file=Letor07Path+'/relation.test.fold%d.txt'%(letor_config['fold']),
config=letor_config)
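# Evaluate on the test lists: accumulate the per-query MAP and average it over all queries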
map_v = 0.0
map_c = 0.0
with torch.no_grad():
for X1, X1_len, X1_id, X2, X2_len, X2_id, Y, F in \
list_gen.get_batch(data1=loader.query_data, data2=loader.doc_data):
#print(X1.shape, X2.shape, Y.shape)
X1, X1_len, X2, X2_len, Y, F = to_device(X1, X1_len, X2, X2_len, Y, F, device=select_device)
X1, X2, X1_len, X2_len, X2_pos = select_net_e(X1, X2, X1_len, X2_len, X1_id, X2_id)
X2, X2_len = utils.data_adaptor(X2, X2_len, select_net, rank_net, letor_config)
#print(X1.shape, X2.shape, Y.shape)
pred = rank_net_e(X1, X2, X1_len, X2_len, X2_pos)
map_o = utils.eval_MAP(pred.tolist(), Y.tolist())
#print(pred.shape, Y.shape)
map_v += map_o
map_c += 1.0
map_v /= map_c
print('[Test]', map_v)
```
|
github_jupyter
|
```
from IPython.display import Image
```
This is a follow-on from Tutorial 1, where we browsed the Ocean marketplace and downloaded the imagenette dataset. In this tutorial, we will create a model that trains (and overfits) on the small amount of sample data. Once we know that the data interface of the input is compatible with our model (and that the model can successfully overfit on the sample data), we can be confident enough to send the model to train on the complete dataset.
Now lets inspect the sample data. The data provider should provide this in the same format as the whole dataset. This helps us as data scientists to write scripts that run on both the sample data and the whole dataset. We call this the **interface** of the data.
```
from pathlib import Path
imagenette_dir = Path('imagenette2-sample')
print(f"Sub-directories: {sorted(list(imagenette_dir.glob('*')))}")
sorted(list(imagenette_dir.glob('*')))
train_dir, val_dir = sorted(list(imagenette_dir.glob('*')))
print(f"Sub-directories in train: {sorted(list(train_dir.glob('*/*')))}")
print(f"Sub-directories in val: {sorted(list(val_dir.glob('*/*')))}")
```
It seems like both the training and validation directories have folders for each category of image, which contain the image files. Of course, we could read the dataset docs if this wasn't immediately clear.
```
train_images = sorted(list(train_dir.glob('*/*')))
val_images = sorted(list(val_dir.glob('*/*')))
print(f"Number of train images:", len(train_images))
print(f"Number of val images:", len(val_images))
```
We will use the fast.ai library to train a simple image classifier.
```
from fastai.vision.all import *
```
First we will attempt to train as normal (using both training and validation sets) to ensure that all of the images load without any errors. We start by creating the dataloaders:
```
path = Path('imagenette2-sample.tgz')
import xtarfile as tarfile
tar = tarfile.open(path, "r:gz")
from PIL import Image
import io
images = []
for member in tar.getmembers():
f = tar.extractfile(member)
if f is not None:
image_data = f.read()
image = Image.open(io.BytesIO(image_data))
images.append(image)
path = Path("imagenette2-sample")
dls = ImageDataLoaders.from_folder(path, train='train', valid='val',
item_tfms=RandomResizedCrop(128, min_scale=0.35), batch_tfms=Normalize.from_stats(*imagenet_stats), bs=2)
```
We can visualise the images in the training set as follows:
```
dls.show_batch()
```
We choose a simple ResNet-34 architecture.
```
learn = cnn_learner(dls, resnet34, metrics=accuracy, pretrained=False)
```
And run training for 8 epochs with a learning rate of 1e-4.
```
learn.fit_one_cycle(8, 1e-4)
```
As you can see, the accuracy is 50%, which is the same as random guessing. We can visualise the results as follows. Note that the results are on the validation images.
```
learn.show_results()
```
The reason for the low accuracy is that the training set is not large enough to generalise to the validation set. Thus, while we have confirmed that both the training images and validation images load correctly, we have not confirmed that our selected model trains properly. To ensure this, we will instead use the training set for validation. This is a very simple case for the model since it does not have to learn to generalise and can simply memorise the input data. If the model cannot achieve this, there must be some bug in the code. Let's create new dataloaders for this scenario:
```
dls_overfit = ImageDataLoaders.from_folder(imagenette_dir, train='train', valid='train',
item_tfms=RandomResizedCrop(128, min_scale=0.35), batch_tfms=Normalize.from_stats(*imagenet_stats), bs=2)
dls_overfit.show_batch()
learn_overfit = cnn_learner(dls_overfit, resnet34, metrics=accuracy, pretrained=False)
learn_overfit.fit_one_cycle(8, 1e-4)
```
Note that the results are now on the training images.
```
learn_overfit.show_results()
preds, targs = learn_overfit.get_preds()
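# A quick sanity check (sketch): if the model has memorised the training set,
# the accuracy computed from these predictions should be close to 1.0
train_acc = (preds.argmax(dim=1) == targs.squeeze()).float().mean()
print(f"Accuracy on the training set used as validation: {float(train_acc):.3f}")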
```
|
github_jupyter
|
```
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
tf.__version__
model = tf.keras.models.load_model("runs/machine_translation/2")
```
https://www.tensorflow.org/beta/tutorials/text/transformer#evaluate
```
tokenizer_pt = tfds.features.text.SubwordTextEncoder.load_from_file(
"subwords/ted_hrlr_translate/pt_to_en/subwords_pt")
tokenizer_en = tfds.features.text.SubwordTextEncoder.load_from_file(
"subwords/ted_hrlr_translate/pt_to_en/subwords_en")
inp_sentence = "este é um problema que temos que resolver."
```
real translation: "this is a problem we have to solve ."
```
inp = tf.expand_dims([tokenizer_pt.vocab_size] + tokenizer_pt.encode(inp_sentence) + [tokenizer_pt.vocab_size + 1], 0)
tar = tf.expand_dims([tokenizer_en.vocab_size], 0)
# Greedy decoding: repeatedly predict the next token from the growing target
# sequence and append it, until the end token (tokenizer_en.vocab_size + 1) is produced
for _ in range(40):
    preds, enc_enc_attention, dec_dec_attention, enc_dec_attention = model([inp, tar])
    preds = tf.argmax(preds[:, -1:, :], axis=-1, output_type=tf.int32)
    if preds.numpy()[0, 0] == tokenizer_en.vocab_size + 1:
        break
    tar = tf.concat([tar, preds], axis=-1)
    print(tokenizer_en.decode(tar[0].numpy()[1:]))
preds
```
`8088` is the end token
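A quick sanity check (assuming, as the input construction above suggests, that the English end token id is `vocab_size + 1`):
```
print(tokenizer_en.vocab_size + 1)  # expected to match the end token id shown above
```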
Visualizing only encoder-decoder attention heads for the final prediction
```
enc_dec_attention_0, enc_dec_attention_1, enc_dec_attention_2, enc_dec_attention_3 = \
enc_dec_attention["layer_0"][0], enc_dec_attention["layer_1"][0], enc_dec_attention["layer_2"][0], enc_dec_attention["layer_3"][0]
xticklabels = ["##START##"] + [tokenizer_pt.decode([v]) for v in inp.numpy()[0][1:-1]] + ["##END##"]
yticklabels = ["##START##"] + [tokenizer_en.decode([v]) for v in tar.numpy()[0][1:]]
# https://matplotlib.org/users/colormaps.html
cmaps = ["Reds", "spring", "summer", "autumn", "winter", "cool", "Wistia", "Oranges"]
attention_layers = [enc_dec_attention_0, enc_dec_attention_1,
                    enc_dec_attention_2, enc_dec_attention_3]
for layer, attention in enumerate(attention_layers):
    for i, cmap in enumerate(cmaps):
        fig, ax = plt.subplots(1, 1, figsize=(12, 12))
        heatplot = ax.imshow(attention[i].numpy(), cmap=cmap)
        ax.set_xticks(np.arange(11))
        ax.set_yticks(np.arange(13))
        ax.set_xticklabels(xticklabels)
        ax.set_yticklabels(yticklabels)
        plt.colorbar(heatplot)
        plt.title("Layer %d, Attention Head %d" % (layer, i + 1))
```
|
github_jupyter
|
```
import os
import pandas as pd
from bs4 import BeautifulSoup
import sys
import re
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
ps = PorterStemmer()
print os.getcwd();
# if necessary change the directory
#os.chdir('c:\\Users\..')
data = pd.read_csv("nightlife_sanfrancisco_en.csv", header=0, delimiter=",")
# explore the data set
data.shape
data.columns.values
print data["text"][0]
# Remove stop words from "words"
import nltk # import stop words
nltk.download('popular') # Download text data sets, including stop words
from nltk.corpus import stopwords # Import the stop word list
print stopwords.words("english")
#words = [w for w in words if not w in stopwords.words("english")]
#print words # "u" before each word indicates that Python is internally representing each word as a unicode string
# Clean all records
def text_to_words( raw_text ):
# 1. Remove end of line
without_end_line = re.sub('\n', ' ', raw_text)
# 2. Remove start of line
without_start_line = re.sub('\r', ' ', without_end_line)
# 3. Remove punctuation
without_punctual = re.sub(ur'[\W_]+',' ',without_start_line )
# 4. Replace number by XnumberX
without_number = re.sub('(\d+\s*)+', ' XnumberX ', without_punctual)
# 5. Remove non-letters
letters_only = re.sub("[^a-zA-Z]", " ", without_number)
# 6. Convert to lower case
lower_case = letters_only.lower()
# 7. Split into individual words
words = lower_case.split()
# 8. stemming - algorithms Porter stemmer
meaningful_words = [ps.stem(word) for word in words]
# 9.Remove stop words
# Redundant step, removing later in Creating the bag of words step
#stops = set(stopwords.words("english"))
#meaningful_words = [w for w in words if not w in stops]
# 10. Join the words back into one string separated by space and return the result.
return( " ".join( meaningful_words ))
#return (meaningful_words)
clean_text = text_to_words( data["text"][0] )
print clean_text
# Get the number of text based on the dataframe column size
num_text = data["text"].size
# Initialize an empty list to hold the clean text
clean_data = []
# Loop over each text; create an index i that goes from 0 to the length
print "Cleaning and parsing the data set text...\n"
clean_data = []
for i in xrange( 0, num_text ):
# If the index is evenly divisible by 1000, print a message
if( (i+1)%1000 == 0 ):
print "Text %d of %d\n" % ( i+1, num_text )
clean_data.append( text_to_words( data["text"][i] )) # in case of error run "pip install -U nltk"
# Compare original and edited text
data['text'][0]
clean_data[0]
print "Creating the bag of words...\n"
from sklearn.feature_extraction.text import CountVectorizer
# Initialize the "CountVectorizer" object, which is scikit-learn's
# bag of words tool.
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = 'english', \
max_features = 5000)
# fit_transform() does two functions: First, it fits the model
# and learns the vocabulary; second, it transforms our training data
# into feature vectors. The input to fit_transform should be a list of
# strings.
train_data_features = vectorizer.fit_transform(clean_data)
# Numpy arrays are easy to work with, so convert the result to an
# array
train_data_features = train_data_features.toarray()
print train_data_features.shape
# Take a look at the words in the vocabulary
vocab = vectorizer.get_feature_names()
print vocab
import numpy as np
# Sum up the counts of each vocabulary word
dist = np.sum(train_data_features, axis=0)
# For each, print the vocabulary word and the number of times it
# appears in the training set
for tag, count in zip(vocab, dist):
print count, tag
# Using in model, random forest example
print "Training the random forest..."
from sklearn.ensemble import RandomForestClassifier
# Initialize a Random Forest classifier with 100 trees
forest = RandomForestClassifier(n_estimators = 100)
# Fit the forest to the training set, using the bag of words as
# features and the sentiment labels as the response variable
#
# This may take a few minutes to run
forest = forest.fit( train_data_features, data["stars"] )
```
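To gauge how well the fitted forest generalises, here is a minimal hold-out evaluation sketch (hedged: it assumes a scikit-learn version that provides `sklearn.model_selection`, and the 80/20 split and `random_state` are arbitrary choices):
```
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Split the bag-of-words features and star ratings, refit a forest and score it
X_tr, X_te, y_tr, y_te = train_test_split(
    train_data_features, data["stars"], test_size=0.2, random_state=42)
forest_eval = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print(accuracy_score(y_te, forest_eval.predict(X_te)))
```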
|
github_jupyter
|
# Tutorial Part 2: Learning MNIST Digit Classifiers
In the previous tutorial, we learned some basics of how to load data into DeepChem and how to use the basic DeepChem objects to load and manipulate this data. In this tutorial, you'll put the parts together and learn how to train a basic image classification model in DeepChem. You might ask, why are we bothering to learn this material in DeepChem? Part of the reason is that image processing is an increasingly important part of AI for the life sciences. So learning how to train image processing models will be very useful for using some of the more advanced DeepChem features.
The MNIST dataset contains handwritten digits along with their human annotated labels. The learning challenge for this dataset is to train a model that maps the digit image to its true label. MNIST has been a standard benchmark for machine learning for decades at this point.

## Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb)
## Setup
We recommend running this tutorial on Google colab. You'll need to run the following cell of installation commands on Colab to get your environment set up. If you'd rather run the tutorial locally, make sure you don't run these commands (since they'll download and install a new Anaconda python setup)
```
%%capture
%tensorflow_version 1.x
!wget -c https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!chmod +x Miniconda3-latest-Linux-x86_64.sh
!bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
!conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
from tensorflow.examples.tutorials.mnist import input_data
# TODO: This is deprecated. Let's replace with a DeepChem native loader for maintainability.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import deepchem as dc
import tensorflow as tf
from tensorflow.keras.layers import Reshape, Conv2D, Flatten, Dense, Softmax
train = dc.data.NumpyDataset(mnist.train.images, mnist.train.labels)
valid = dc.data.NumpyDataset(mnist.validation.images, mnist.validation.labels)
keras_model = tf.keras.Sequential([
Reshape((28, 28, 1)),
Conv2D(filters=32, kernel_size=5, activation=tf.nn.relu),
Conv2D(filters=64, kernel_size=5, activation=tf.nn.relu),
Flatten(),
Dense(1024, activation=tf.nn.relu),
Dense(10),
Softmax()
])
model = dc.models.KerasModel(keras_model, dc.models.losses.CategoricalCrossEntropy())
model.fit(train, nb_epoch=2)
from sklearn.metrics import roc_curve, auc
import numpy as np
print("Validation")
prediction = np.squeeze(model.predict_on_batch(valid.X))
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(10):
fpr[i], tpr[i], thresh = roc_curve(valid.y[:, i], prediction[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
print("class %s:auc=%s" % (i, roc_auc[i]))
```
# Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
|
github_jupyter
|
# QST CGAN with thermal noise in the channel (convolution)
```
import numpy as np
from qutip import Qobj, fidelity
from qutip.wigner import qfunc
from qutip.states import thermal_dm
from qutip import coherent_dm
from qutip.visualization import plot_wigner_fock_distribution
import tensorflow_addons as tfa
import tensorflow as tf
from qst_nn.ops import (cat, binomial, num, gkp, GaussianConv, husimi_ops, convert_to_real_ops, dm_to_tf, batched_expect)
from qst_cgan.gan import DensityMatrix, Expectation, Discriminator, generator_loss, discriminator_loss
from qst_cgan.ops import convert_to_complex_ops, tf_fidelity
from tqdm.auto import tqdm
from dataclasses import dataclass
import matplotlib.pyplot as plt
tf.keras.backend.set_floatx('float64') # Set float64 as the default
# https://scipy-cookbook.readthedocs.io/items/Matplotlib_LaTeX_Examples.html
fig_width_pt = 246.0 # Get this from LaTeX using \showthe\columnwidth
inches_per_pt = 1.0/72.27 # Convert pt to inch
golden_mean = (np.sqrt(5)-1.0)/2.0 # Aesthetic ratio
fig_width = fig_width_pt*inches_per_pt # width in inches
fig_height = fig_width*golden_mean # height in inches
fig_size = [fig_width,fig_height]
params = {
'axes.labelsize': 9,
'font.size': 9,
'legend.fontsize': 9,
'xtick.labelsize': 8,
'ytick.labelsize': 8,
'text.usetex': True,
'figure.figsize': fig_size,
'axes.labelpad':1,
'legend.handlelength':0.8,
'axes.titlesize': 9,
"text.usetex" : False
}
plt.rcParams.update(params)
# mpl.use('pdf')
```
# We create the state and the data using QuTiP
```
hilbert_size = 32
# Betas can be selected in a grid or randomly in a circle
num_grid = 64
num_points = num_grid*num_grid
beta_max_x = 5
beta_max_y = 5
xvec = np.linspace(-beta_max_x, beta_max_x, num_grid)
yvec = np.linspace(-beta_max_y, beta_max_y, num_grid)
X, Y = np.meshgrid(xvec, yvec)
betas = (X + 1j*Y).ravel()
```
# Measurement ops are simple projectors $\frac{1}{\pi}|\beta \rangle \langle \beta|$
```
m_ops = [(1/np.pi)*coherent_dm(hilbert_size, beta) for beta in betas]
ops_numpy = [op.data.toarray() for op in m_ops] # convert the QuTiP Qobj to numpy arrays
ops_tf = tf.convert_to_tensor([ops_numpy]) # convert the numpy arrays to complex TensorFlow tensors
A = convert_to_real_ops(ops_tf) # convert the complex-valued numpy matrices to real-valued TensorFlow tensors
print(A.shape, A.dtype)
```
# Convolution noise
The presence of thermal photons in the amplification channel leads to the data being
corrupted by a convolution over the Q function data (see [https://arxiv.org/abs/1206.3405](https://arxiv.org/abs/1206.3405)).
The kernel for this convolution is a Gaussian determined by the average photon number in the thermal state. We corrupt our data assuming a thermal state with mean photon number 5.
```
# define normalized 2D gaussian
def gaus2d(x=0, y=0, n0=1):
return 1. / (np.pi * n0) * np.exp(-((x**2 + y**2.0)/n0))
nth = 5
X, Y = np.meshgrid(xvec, yvec) # get 2D variables instead of 1D
gauss_kernel = gaus2d(X, Y, n0=nth)
```
# State to reconstruct
Let us now create a state on which we will run QST
```
rho, _ = cat(hilbert_size, 2, 0, 0)
plot_wigner_fock_distribution(rho)
plt.show()
rho_tf = dm_to_tf([rho])
data = batched_expect(ops_tf, rho_tf)
```
# Q function plots using QuTiP and a custom TensorFlow expectation function
```
fig, ax = plt.subplots(1, 2, figsize=(7, 3))
ax[0].imshow(qfunc(rho, xvec, yvec, g=2))
ax[1].imshow(data.numpy().reshape(num_grid, num_grid))
ax[0].set_title("QuTiP Q func")
ax[1].set_title("TensorFlow computed Q func")
plt.show()
# The thermal state distribution
plot_wigner_fock_distribution(thermal_dm(hilbert_size, nth))
```
# Apply the convolution and show the simulated data that we can obtain experimentally
```
x = tf.reshape(tf.cast(data, tf.float64), (1, num_grid, num_grid, 1))
conved = GaussianConv(gauss_kernel)(x)
kernel = gauss_kernel/tf.reduce_max(gauss_kernel)
diff = conved.numpy().reshape(num_grid, num_grid)/tf.reduce_max(conved) - kernel.numpy().reshape(num_grid, num_grid)
diff = tf.convert_to_tensor(diff)
# Collect all the data in an array for plotting
matrices = [gauss_kernel.reshape((num_grid, num_grid)), x.numpy().reshape((num_grid, num_grid)),
conved.numpy().reshape((num_grid, num_grid)), diff.numpy().reshape((num_grid, num_grid))]
fig, ax = plt.subplots(1, 4, figsize=(fig_width, 0.35*2.5*fig_height), dpi=80, facecolor="white",
sharey=False, sharex=True)
axes = [ax[0], ax[1], ax[2], ax[3]]
aspect = 'equal'
for i in range(4):
im = axes[i].pcolor(xvec, yvec,
matrices[i]/np.max(matrices[i]), cmap="hot", vmin=0, vmax=1)
axes[i].set_aspect("equal")
axes[i].set_xticklabels(["", "", ""])
axes[i].set_yticklabels(["", "", ""], fontsize=6)
# axes[i].set_xlabel(r"$Re(\beta)$", fontsize=6)
axes[0].set_yticklabels(["-5", "", "5"], fontsize=6)
labels = ["Background\n(Gaussian)", "State", "Data\n(Convolution)", "Subtracted"]
for i in range(len(labels)):
axes[i].set_title(labels[i], fontsize=6)
# plt.subplots_adjust(wspace=-.4)
# cbar = fig.colorbar(im, ax=axes, pad=0.026, fraction = 0.046)
# cbar.ax.set_yticklabels(["0", "0.5", "1"])
axes[0].set_ylabel(r"Im$(\beta)$", labelpad=-8, fontsize=6)
######################################################################################################
```
# QST CGAN with a Gaussian convolution layer
```
def GeneratorConvQST(hilbert_size, num_points, noise=0.02, kernel=None):
"""
A tensorflow generative model which can be called as
>> generator([A, x])
where A is the set of all measurement operators
transformed into the shape (batch_size, hilbert_size, hilbert_size, num_points*2)
This can be done using the function `convert_to_real_ops` which
takes a set of complex operators shaped as (batch_size, num_points, hilbert_size, hilbert_size)
and converts it to this format which is easier to run convolution operations on.
x is the measurement statistics (frequencies) represented by a vector of shape
[batch_size, num_points] where we consider num_points different operators and their
expectation values.
Args:
hilbert_size (int): Hilbert size of the output matrix
This needs to be 32 now. We can adjust
the network architecture to allow it to
automatically change its outputs according
to the hilbert size in future
num_points (int): Number of different measurement operators
Returns:
generator: A TensorFlow model callable as
>> generator([A, x])
"""
initializer = tf.random_normal_initializer(0., 0.02)
n = int(hilbert_size/2)
ops = tf.keras.layers.Input(shape=[hilbert_size, hilbert_size, num_points*2],
name='operators')
inputs = tf.keras.Input(shape=(num_points), name = "inputs")
x = tf.keras.layers.Dense(16*16*2, use_bias=False,
kernel_initializer = tf.random_normal_initializer(0., 0.02),
)(inputs)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.Reshape((16, 16, 2))(x)
x = tf.keras.layers.Conv2DTranspose(64, 4, use_bias=False,
strides=2,
padding='same',
kernel_initializer=initializer)(x)
x = tfa.layers.InstanceNormalization(axis=3)(x)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.Conv2DTranspose(64, 4, use_bias=False,
strides=1,
padding='same',
kernel_initializer=initializer)(x)
x = tfa.layers.InstanceNormalization(axis=3)(x)
x = tf.keras.layers.LeakyReLU()(x)
x = tf.keras.layers.Conv2DTranspose(32, 4, use_bias=False,
strides=1,
padding='same',
kernel_initializer=initializer)(x)
# x = tfa.layers.InstanceNormalization(axis=3)(x)
# x = tf.keras.layers.LeakyReLU()(x)
# y = tf.keras.layers.Conv2D(8, 5, padding='same')(ops)
# out = x
# x = tf.keras.layers.concatenate([x, y])
x = tf.keras.layers.Conv2DTranspose(2, 4, use_bias=False,
strides=1,
padding='same',
kernel_initializer=initializer)(x)
x = DensityMatrix()(x)
complex_ops = convert_to_complex_ops(ops)
# prefactor = (0.25*g**2/np.pi)
prefactor = 1.
x = Expectation()(complex_ops, x, prefactor)
x = tf.keras.layers.Reshape((num_grid, num_grid, 1))(x)
x = GaussianConv(kernel, trainable=False)(x)
# x = x/tf.reduce_max(x)
x = tf.keras.layers.Reshape((num_points,))(x)
# y = kernel/tf.reduce_max(kernel)
# y = tf.reshape(y, (1, num_points))
# x = x - y
return tf.keras.Model(inputs=[ops, inputs], outputs=x)
tf.keras.backend.clear_session()
generator = GeneratorConvQST(hilbert_size, num_points, kernel=gauss_kernel)
discriminator = Discriminator(hilbert_size, num_points)
density_layer_idx = None
for i, layer in enumerate(generator.layers):
if "density_matrix" in layer._name:
density_layer_idx = i
break
print(density_layer_idx)
model_dm = tf.keras.Model(inputs=generator.input, outputs=generator.layers[density_layer_idx].output)
@dataclass
class LossHistory:
"""Class for keeping track of loss"""
generator: list
discriminator: list
l1: list
loss = LossHistory([], [], [])
fidelities = []
initial_learning_rate = 0.0002
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(initial_learning_rate,
decay_steps=10000,
decay_rate=.96,
staircase=False)
lam = 10.
generator_optimizer = tf.keras.optimizers.Adam(lr_schedule, 0.5, 0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(lr_schedule, 0.5, 0.5)
def train_step(A, x):
"""Takes one step of training for the full A matrix representing the
measurement operators and data x.
Note that the `generator`, `discriminator`, `generator_optimizer` and the
`discriminator_optimizer` has to be defined before calling this function.
Args:
A (tf.Tensor): A tensor of shape (m, hilbert_size, hilbert_size, n x 2)
where m=1 for a single reconstruction, and n represents
the number of measured operators. We split the complex
operators as real and imaginary in the last axis. The
helper function `convert_to_real_ops` can be used to
generate the matrix A with a set of complex operators
given by `ops` with shape (1, n, hilbert_size, hilbert_size)
by calling `A = convert_to_real_ops(ops)`.
x (tf.Tensor): A tensor of shape (m, n) with m=1 for a single
reconstruction and `n` representing the number of
measurements.
"""
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
gen_output = generator([A, x], training=True)
disc_real_output = discriminator([A, x, x], training=True)
disc_generated_output = discriminator([A, x, gen_output], training=True)
gen_total_loss, gen_gan_loss, gen_l1_loss = generator_loss(
disc_generated_output, gen_output, x, lam=lam
)
disc_loss = discriminator_loss(disc_real_output, disc_generated_output)
generator_gradients = gen_tape.gradient(
gen_total_loss, generator.trainable_variables
)
discriminator_gradients = disc_tape.gradient(
disc_loss, discriminator.trainable_variables
)
generator_optimizer.apply_gradients(
zip(generator_gradients, generator.trainable_variables)
)
discriminator_optimizer.apply_gradients(
zip(discriminator_gradients, discriminator.trainable_variables)
)
loss.generator.append(gen_gan_loss)
loss.l1.append(gen_l1_loss)
loss.discriminator.append(disc_loss)
max_iterations = 300
pbar = tqdm(range(max_iterations))
for i in pbar:
train_step(A, conved.numpy().reshape(-1, num_points))
density_matrix = model_dm([A, conved.numpy().reshape(-1, num_points)])
rho_reconstructed = Qobj(density_matrix.numpy().reshape(rho.shape))
f = fidelity(rho_reconstructed, rho)
fidelities.append(f)
pbar.set_description("Fidelity {} | Gen loss {} | L1 loss {} | Disc loss {}".format(f, loss.generator[-1], loss.l1[-1], loss.discriminator[-1]))
rho_reconstructed = Qobj(density_matrix.numpy().reshape(rho.shape))
fig, ax = plot_wigner_fock_distribution(rho_reconstructed, alpha_max=beta_max_x, colorbar=True, figsize=(9, 3.5))
plt.title("Fidelity {:.4}".format(fidelity(rho_reconstructed, rho)))
plt.suptitle("QST CGAN reconstruction")
plt.show()
rho_tf_reconstructed = dm_to_tf([rho_reconstructed])
data_reconstructed = batched_expect(ops_tf, rho_tf_reconstructed)
reconstructed_x = tf.reshape(tf.cast(data_reconstructed, tf.float64), (1, num_grid, num_grid, 1))
reconstructed_conved = GaussianConv(gauss_kernel)(reconstructed_x)
diff2 = reconstructed_conved.numpy().reshape(num_grid, num_grid)/tf.reduce_max(reconstructed_conved) - kernel.numpy().reshape(num_grid, num_grid)
matrices2 = [gauss_kernel.reshape((num_grid, num_grid)), reconstructed_x.numpy().reshape((num_grid, num_grid)),
reconstructed_conved.numpy().reshape((num_grid, num_grid)), diff2.numpy().reshape((num_grid, num_grid))]
figpath = "figures/"
fig, ax = plt.subplots(2, 4, figsize=(fig_width, 0.35*2.5*fig_height), dpi=80, facecolor="white",
sharey=False, sharex=True)
axes = [ax[0, 0], ax[0, 1], ax[0, 2], ax[0, 3]]
aspect = 'equal'
for i in range(4):
im = axes[i].pcolor(xvec, yvec,
matrices[i]/np.max(matrices[i]), cmap="hot", vmin=0, vmax=1)
axes[i].set_aspect("equal")
axes[i].set_xticklabels(["", "", ""])
axes[i].set_yticklabels(["", "", ""], fontsize=6)
# axes[i].set_xlabel(r"$Re(\beta)$", fontsize=6)
axes[0].set_yticklabels(["-5", "", "5"], fontsize=6)
labels = ["Background\n(Gaussian)", "State", "Data\n(Convolution)", "Subtracted"]
for i in range(len(labels)):
axes[i].set_title(labels[i], fontsize=6)
# plt.subplots_adjust(wspace=-.4)
# cbar = fig.colorbar(im, ax=axes, pad=0.026, fraction = 0.046)
# cbar.ax.set_yticklabels(["0", "0.5", "1"])
axes[0].set_ylabel(r"Im$(\beta)$", labelpad=-8, fontsize=6)
plt.text(x = -24.5, y=30, s="cat state", fontsize=8)
######################################################################################################
axes = [ax[1, 0], ax[1, 1], ax[1, 2], ax[1, 3]]
for i in range(1, 4):
axes[i].pcolor(xvec, yvec,
matrices2[i]/np.max(matrices2[i]), cmap="hot", vmin=0, vmax=1)
axes[i].set_aspect("equal")
axes[i].set_xticklabels(["-5", "", "5"], fontsize=6)
axes[i].set_yticklabels(["", "", ""])
axes[i].set_xlabel(r"Re$(\beta)$", fontsize=6, labelpad=-4)
labels = ["Background\n(Gaussian)", "Reconstructed\nState", r"$Convoluted\noutput$"+"\noutput", "Subtracted"]
# for i in range(1, len(labels)):
# axes[i].set_title(labels[i], fontsize=6)
plt.subplots_adjust(hspace=0.7)
# cbar = fig.colorbar(im, ax=axes, pad=0.026, fraction = 0.046)
# cbar.ax.set_yticklabels(["0", "0.5", "1"])
plt.suptitle("QST-CGAN reconstruction", x=.45, y=.52, fontsize=8)
axes[1].set_ylabel(r"$Im(\beta)$", labelpad=-8, fontsize=6)
axes[1].set_yticklabels(["-5", "", "5"], fontsize=6)
axes[1].set_yticklabels(["-5", "", "5"])
axes[0].set_visible(False)
cbar = plt.colorbar(im, ax=ax.ravel().tolist(), aspect=40, ticks=[0, 0.5, 1], pad=0.02)
cbar.set_ticklabels(["0", "0.5", "1"])
cbar.ax.tick_params(labelsize=6)
# plt.text(x = -44.5, y=30, s="(a)", fontsize=8)
# plt.savefig(figpath+"fig-15a-fock-reconstruction.pdf", bbox_inches="tight", pad_inches=0)
```
|
github_jupyter
|
# T008 · Protein data acquisition: Protein Data Bank (PDB)
Authors:
- Anja Georgi, CADD seminar, 2017, Charité/FU Berlin
- Majid Vafadar, CADD seminar, 2018, Charité/FU Berlin
- Jaime Rodríguez-Guerra, Volkamer lab, Charité
- Dominique Sydow, Volkamer lab, Charité
__Talktorial T008__: This talktorial is part of the TeachOpenCADD pipeline described in the first TeachOpenCADD publication ([_J. Cheminform._ (2019), **11**, 1-7](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0351-x)), comprising talktorials T001-T010.
## Aim of this talktorial
In this talktorial, we conduct the groundwork for the next talktorial where we will generate a ligand-based ensemble pharmacophore for EGFR. Therefore, we
(i) fetch all PDB IDs for EGFR from the PDB database,
(ii) retrieve five protein-ligand structures, which have the best structural quality and are derived from X-ray crystallography, and
(iii) align all structures to each other in 3D as well as extract and save the ligands to be used in the next talktorial.
### Contents in Theory
* Protein Data Bank (PDB)
* Python package `pypdb`
### Contents in Practical
* Select query protein
* Get all PDB IDs for query protein
* Get statistics on PDB entries for query protein
* Get meta information on PDB entries
* Filter and sort meta information on PDB entries
* Get meta information of ligands from top structures
* Draw top ligand molecules
* Create protein-ligand ID pairs
* Get the PDB structure files
* Align PDB structures
### References
* Protein Data Bank
([PDB website](http://www.rcsb.org/))
* `pypdb` python package
([_Bioinformatics_ (2016), **1**, 159-60](https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btv543), [documentation](http://www.wgilpin.com/pypdb_docs/html/))
* Molecular superposition with the python package `opencadd` ([repository](https://github.com/volkamerlab/opencadd))
## Theory
### Protein Data Bank (PDB)
The Protein Data Bank (PDB) is one of the most comprehensive structural biology information databases and a key resource in areas of structural biology, such as structural genomics and drug design ([PDB website](http://www.rcsb.org/)).
Structural data is generated from structural determination methods such as X-ray crystallography (most common method), nuclear magnetic resonance (NMR), and cryo electron microscopy (cryo-EM).
For each entry, the database contains (i) the 3D coordinates of the atoms and the bonds connecting these atoms for proteins, ligand, cofactors, water molecules, and ions, as well as (ii) meta information on the structural data such as the PDB ID, the authors, the deposition date, the structural determination method used and the structural resolution.
The structural resolution is a measure of the quality of the data that has been collected and has the unit Å (Angstrom). The lower the value, the higher the quality of the structure.
The PDB website offers a 3D visualization of the protein structures (with ligand interactions if available) and a structure quality metrics, as can be seen for the PDB entry of an example epidermal growth factor receptor (EGFR) with the PDB ID [3UG5](https://www.rcsb.org/structure/3UG5).

Figure 1: The protein structure (in gray) with an interacting ligand (in green) is shown for an example epidermal growth factor receptor (EGFR) with the PDB ID 3UG5 (figure by Dominique Sydow).
### Python package `pypdb`
`pypdb` is a python programming interface for the PDB and works exclusively in Python 3 ([_Bioinformatics_ (2016), **1**, 159-60](https://academic.oup.com/bioinformatics/article-lookup/doi/10.1093/bioinformatics/btv543), [documentation](http://www.wgilpin.com/pypdb_docs/html/)).
This package facilitates the integration of automatic PDB searches within bioinformatics workflows and simplifies the process of performing multiple searches based on the results of existing searches.
It also allows an advanced querying of information on PDB entries.
The PDB currently uses a RESTful API that allows for the retrieval of information via standard HTTP vocabulary. `pypdb` converts these objects into XML strings.
## Practical
```
import collections
import logging
import pathlib
import time
import warnings
import pandas as pd
from tqdm.auto import tqdm
import redo
import requests_cache
import nglview
import pypdb
from rdkit.Chem import Draw
from rdkit.Chem import PandasTools
from opencadd.structure.superposition.api import align, METHODS
from opencadd.structure.core import Structure
# Disable some unneeded warnings
logger = logging.getLogger("opencadd")
logger.setLevel(logging.ERROR)
warnings.filterwarnings("ignore")
# cache requests -- this will speed up repeated queries to PDB
requests_cache.install_cache("rcsb_pdb", backend="memory")
# define paths
HERE = pathlib.Path(_dh[-1])
DATA = HERE / "data"
```
### Select query protein
We use EGFR as query protein for this talktorial. The UniProt ID of EGFR is `P00533`, which will be used in the following to query the PDB database.
### Get all PDB IDs for query protein
First, we get all PDB structures for our query protein EGFR, using the `pypdb` functions `make_query` and `do_search`.
```
search_dict = pypdb.make_query("P00533")
found_pdb_ids = pypdb.do_search(search_dict)
print("Sample PDB IDs found for query:", *found_pdb_ids[:3], "...")
print("Number of EGFR structures found:", len(found_pdb_ids))
```
### Get statistics on PDB entries for query protein
Next, we ask the question: How many PDB entries are deposited in the PDB for EGFR per year and how many in total?
Using `pypdb`, we can find all deposition dates of EGFR structures from the PDB database. The number of deposited structures was already determined and is needed to set the parameter `max_results` of the function `find_dates`.
```
# Query database
dates = pypdb.find_dates("P00533", max_results=len(found_pdb_ids))
# Example of the first three deposition dates
dates[:3]
```
We extract the year from the deposition dates and calculate a depositions-per-year histogram.
```
# Extract year
years = pd.Series([int(date[:4]) for date in dates])
bins = years.max() - years.min() + 1
axes = years.hist(bins=bins)
axes.set_ylabel("New entries per year")
axes.set_xlabel("Year")
axes.set_title("PDB entries for EGFR");
```
### Get meta information for PDB entries
We use `describe_pdb` to get meta information about the structures, which is stored per structure as a dictionary.
Note: we only fetch meta information on PDB structures here; we do not fetch the structures (3D coordinates) yet.
> The `redo.retriable` line is a _decorator_. This wraps the function and provides extra functionality. In this case, it will retry failed queries automatically (10 times maximum).
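For intuition, a decorator with this kind of retry behavior could be sketched roughly as follows (a minimal illustration with hypothetical names, not the actual `redo` implementation):
```
import functools
import time


def retriable(attempts=10, sleeptime=2):
    """Minimal sketch of a retry decorator (illustration only, not redo's code)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # give up after the final attempt
                    time.sleep(sleeptime)  # wait before trying again
        return wrapper
    return decorator
```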
```
@redo.retriable(attempts=10, sleeptime=2)
def describe_one_pdb_id(pdb_id):
"""Fetch meta information from PDB."""
described = pypdb.describe_pdb(pdb_id)
if described is None:
print(f"! Error while fetching {pdb_id}, retrying ...")
raise ValueError(f"Could not fetch PDB id {pdb_id}")
return described
pdbs = [describe_one_pdb_id(pdb_id) for pdb_id in found_pdb_ids]
pdbs[0]
```
### Filter and sort meta information on PDB entries
Since we want to use the information to filter for relevant PDB structures, we convert the data set from a list of dictionaries to a DataFrame for easier handling.
```
pdbs = pd.DataFrame(pdbs)
pdbs.head()
print(f"Number of PDB structures for EGFR: {len(pdbs)}")
```
We start filtering our dataset based on the following criteria:
#### 1. Experimental method: X-ray diffraction
We only keep structures resolved by `X-RAY DIFFRACTION`, the most commonly used structure determination method.
```
pdbs = pdbs[pdbs.expMethod == "X-RAY DIFFRACTION"]
print(f"Number of PDB structures for EGFR from X-ray: {len(pdbs)}")
```
#### 2. Structural resolution
We only keep structures with a resolution equal to or lower than 3 Å. The lower the resolution value, the higher the quality of the structure (i.e. the higher the certainty that the assigned 3D coordinates of the atoms are correct). Below 3 Å, atomic orientations can be determined, which is why this value is often used as a threshold for structures relevant to structure-based drug design.
```
pdbs.resolution = pdbs.resolution.astype(float) # convert to floats
pdbs = pdbs[pdbs.resolution <= 3.0]
print(f"Number of PDB entries for EGFR from X-ray with resolution <= 3.0 Angstrom: {len(pdbs)}")
```
We sort the data set by the structural resolution.
```
pdbs = pdbs.sort_values(["resolution"], ascending=True, na_position="last")
```
We check the top PDB structures (sorted by resolution):
```
pdbs.head()[["structureId", "resolution"]]
```
#### 3. Ligand-bound structures
Since we will create ensemble ligand-based pharmacophores in the next talktorial, we remove all PDB structures from our DataFrame which do not contain a bound ligand: we use the `pypdb` function `get_ligands` to retrieve the ligand(s) of a PDB structure. PDB-annotated ligands can be actual ligands or cofactors, but also solvents and ions. In order to keep only ligand-bound structures, we (i) remove all structures without any annotated ligand and (ii) remove all structures that do not contain any ligand with a molecular weight (MW) greater than 100 Da (Dalton), since many solvents and ions weigh less. Note: this is a simple, but not comprehensive, exclusion of solvents and ions.
```
# Get all PDB IDs from DataFrame
pdb_ids = pdbs["structureId"].tolist()
# Remove structures
# (i) without ligand and
# (ii) without any ligands with molecular weight (MW) greater than 100 Da (Dalton)
@redo.retriable(attempts=10, sleeptime=2)
def get_ligands(pdb_id):
"""Decorate pypdb.get_ligands so it retries after a failure."""
return pypdb.get_ligands(pdb_id)
mw_cutoff = 100.0 # Molecular weight cutoff in Da
# This database query may take a moment
passed_pdb_ids = []
removed_pdb_ids = []
progressbar = tqdm(pdb_ids)
for pdb_id in progressbar:
progressbar.set_description(f"Processing {pdb_id}...")
ligand_dict = get_ligands(pdb_id)
# (i) Remove structure if no ligand present
if ligand_dict["ligandInfo"] is None:
removed_pdb_ids.append(pdb_id) # Store ligand-free PDB IDs
# (ii) Remove structure if not a single annotated ligand has a MW above mw_cutoff
else:
# Get ligand information
ligands = ligand_dict["ligandInfo"]["ligand"]
# Technicality: if only one ligand, cast dict to list (for the subsequent list comprehension)
if type(ligands) == dict:
ligands = [ligands]
# Get MW per annotated ligand
mw_list = [float(ligand["@molecularWeight"]) for ligand in ligands]
# Remove structure if not a single annotated ligand has a MW above mw_cutoff
if sum([mw > mw_cutoff for mw in mw_list]) == 0:
removed_pdb_ids.append(pdb_id) # Store PDB IDs without a ligand above the MW cutoff
else:
passed_pdb_ids.append(pdb_id) # Keep PDB IDs with at least one ligand above the MW cutoff
print(
"PDB structures without a ligand (removed from our data set):",
*removed_pdb_ids,
)
print("Number of structures with ligand:", len(passed_pdb_ids))
```
### Get meta information of ligands from top structures
In the next talktorial, we will build ligand-based ensemble pharmacophores from the top `top_num` structures with the highest resolution.
```
top_num = 8 # Number of top structures
selected_pdb_ids = passed_pdb_ids[:top_num]
selected_pdb_ids
```
The selected highest resolution PDB entries can contain ligands targeting different binding sites, e.g. allosteric and orthosteric ligands, which would hamper ligand-based pharmacophore generation. Thus, we will focus on the following 4 structures, which contain ligands in the orthosteric binding pocket. The code provided later in the notebook can be used to verify this.
```
selected_pdb_ids = ["5UG9", "5HG8", "5UG8", "3POZ"]
```
We fetch the PDB information about the ligands of the selected structures using `get_ligands` and store it as a *csv* file (one dictionary per ligand).
If a structure contains several ligands, we select the largest one. Note: this is a simple, but not comprehensive, method to select the ligand bound in the binding site of a protein. This approach may also select a cofactor bound to the protein. Therefore, please check the automatically selected top ligands visually before further usage.
```
ligands_list = []
for pdb_id in selected_pdb_ids:
ligands = get_ligands(pdb_id)["ligandInfo"]["ligand"]
# Technicality: if only one ligand, cast dict to list (for the subsequent list comprehension)
if isinstance(ligands, dict):
ligands = [ligands]
weight = 0
this_ligand = {}
# If several ligands contained, take largest
for ligand in ligands:
if float(ligand["@molecularWeight"]) > weight:
this_ligand = ligand
weight = float(ligand["@molecularWeight"])
ligands_list.append(this_ligand)
# NBVAL_CHECK_OUTPUT
# Change the format to DataFrame
ligands = pd.DataFrame(ligands_list)
ligands
ligands.to_csv(DATA / "PDB_top_ligands.csv", header=True, index=False)
```
### Draw top ligand molecules
```
PandasTools.AddMoleculeColumnToFrame(ligands, "smiles")
Draw.MolsToGridImage(
mols=list(ligands.ROMol),
legends=list(ligands["@chemicalID"] + ", " + ligands["@structureId"]),
molsPerRow=top_num,
)
```
### Create protein-ligand ID pairs
```
# NBVAL_CHECK_OUTPUT
pairs = collections.OrderedDict(zip(ligands["@structureId"], ligands["@chemicalID"]))
pairs
```
### Align PDB structures
Since we want to build ligand-based ensemble pharmacophores in the next talktorial, it is necessary to align all structures to each other in 3D.
We will use the Python package `opencadd` ([repository](https://github.com/volkamerlab/opencadd)), which includes a 3D superposition subpackage to guide the structural alignment of the proteins. The approach superposes the structures based on residues matched by a sequence alignment. There are other methods in the package, but this simple one is sufficient for the task at hand.
#### Get the PDB structure files
We now fetch the PDB structure files, i.e. the 3D coordinates of the protein and ligand (and, if available, other atomic or molecular entities such as cofactors, water molecules, and ions) from the PDB using `opencadd`.
Available file formats are *pdb* and *cif*, which store the 3D coordinates of the atoms of the protein (and ligand, cofactors, water molecules, and ions) as well as information on the bonds between atoms. Here, we work with *pdb* files.
```
# Download PDB structures
structures = [Structure.from_pdbid(pdb_id) for pdb_id in pairs]
structures
```
#### Extract protein and ligand
Extract protein and ligand from the structure in order to remove solvent and other artifacts of crystallography.
```
complexes = [
Structure.from_atomgroup(structure.select_atoms(f"protein or resname {ligand}"))
for structure, ligand in zip(structures, pairs.values())
]
complexes
# Write complex to file
for complex_, pdb_id in zip(complexes, pairs.keys()):
complex_.write(DATA / f"{pdb_id}.pdb")
```
#### Align proteins
Align complexes (based on protein atoms).
```
results = align(complexes, method=METHODS["mda"])
```
`nglview` can be used to visualize molecular data within Jupyter notebooks. With the next cell we will visualize our aligned protein-ligand complexes.
```
view = nglview.NGLWidget()
for complex_ in complexes:
view.add_component(complex_.atoms)
view
view.render_image(trim=True, factor=2, transparent=True);
view._display_image()
```
#### Extract ligands
```
ligands = [
Structure.from_atomgroup(complex_.select_atoms(f"resname {ligand}"))
for complex_, ligand in zip(complexes, pairs.values())
]
ligands
for ligand, pdb_id in zip(ligands, pairs.keys()):
ligand.write(DATA / f"{pdb_id}_lig.pdb")
```
We check the existence of all ligand *pdb* files.
```
ligand_files = []
for file in DATA.glob("*_lig.pdb"):
ligand_files.append(file.name)
ligand_files
```
We can also use `nglview` to depict the co-crystallized ligands alone. As we can see, the selected complexes contain ligands populating the same binding pocket and can thus be used in the next talktorial for ligand-based pharmacophore generation.
```
view = nglview.NGLWidget()
for component_id, ligand in enumerate(ligands):
view.add_component(ligand.atoms)
view.remove_ball_and_stick(component=component_id)
view.add_licorice(component=component_id)
view
view.render_image(trim=True, factor=2, transparent=True);
view._display_image()
```
## Discussion
In this talktorial, we learned how to retrieve protein and ligand meta information and structural information from the PDB. We retained only X-ray structures and filtered our data by resolution and ligand availability. Ultimately, we aimed for an aligned set of ligands to be used in the next talktorial for the generation of ligand-based ensemble pharmacophores.
In order to enrich information about ligands for pharmacophore modeling, it is advisable to not only filter by PDB structure resolution, but also to check for ligand diversity (see **Talktorial 005** on molecule clustering by similarity) and to check for ligand activity (i.e. to include only potent ligands).
## Quiz
1. Summarize the kind of data that the Protein Data Bank contains.
2. Explain what the resolution of a structure stands for and how and why we filter for it in this talktorial.
3. Explain what an alignment of structures means and discuss the alignment performed in this talktorial.
|
github_jupyter
|
# Image classification training on a DEBIAI project with a dataset generator
This tutorial shows how to classify images of flowers after inserting the project's contextual data into DEBIAI.
Based on the TensorFlow tutorial: https://www.tensorflow.org/tutorials/images/classification
```
# Import TensorFlow and other libraries
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
# The pythonModule folder needs to be in the same folder
from debiai import debiai
```
## Download and explore the dataset
This tutorial uses a dataset of about 3,700 photos of flowers. The dataset contains 5 sub-directories, one per class:
daisy, dandelion, roses, sunflowers and tulips
```
import pathlib
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
```
## Create a dataset
```
# Define some parameters for the loader:
batch_size = 32
img_height = 180
img_width = 180
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
class_names = train_ds.class_names
print(class_names)
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
```
## Insert the project contextual data in DEBIAI
```
# Creation of the DEBIAI project block structure
DEBIAI_block_structure = [
{
"name": "image_id",
"groundTruth": [
{ "name": "class", "type": "text"},
],
"contexts": [
{ "name": "img_path", "type": "text"},
]
}
]
```
#### Converting some of the project data into a dataframe
In this example, it is done with the creation of a dataframe.
More details here:
https://git.irt-systemx.fr/ML/DEBIAI/pythonModule#adding-samples
```
# Creation of a dataframe with the same columns as the block structure
data = {"image_id": [], "class": [], "img_path": []}
i = 0
for class_name in class_names:
images = list(data_dir.glob(class_name + '/*'))
for image in images:
data["image_id"].append(i)
data["class"].append(class_name)
data["img_path"].append(str(image))
i += 1
df = pd.DataFrame(data=data)
df
# Creation of a DEBIAI instance
DEBIAI_BACKEND_URL = 'http://localhost:3000/'
DEBIAI_PROJECT_NAME = 'Image classification demo'
my_debiai = debiai.Debiai(DEBIAI_BACKEND_URL)
# Creation of a DEBIAI project if it doesn't exist
debiai_project = my_debiai.get_project(DEBIAI_PROJECT_NAME)
if not debiai_project :
debiai_project = my_debiai.create_project(DEBIAI_PROJECT_NAME)
debiai_project
# Set the project block_structure if not already done
if not debiai_project.block_structure_defined():
debiai_project.set_blockstructure(DEBIAI_block_structure)
debiai_project.get_block_structure()
# Adding the dataframe
debiai_project.add_samples_pd(df, get_hash=False)
```
## Create the model
```
num_classes = len(class_names)
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
# Compile the model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
```
## Train the model with the DEBIAI Dataset generator
```
# Because DEBIAI does not store the images used to train the models, we provide a function that builds the model input from a sample's information, based on the given block_structure
def model_input_from_debiai_sample(debiai_sample: dict):
# "image_id", "class", "img_path"
img = keras.preprocessing.image.load_img(
debiai_sample['img_path'], target_size=(img_height, img_width))
img_array = keras.preprocessing.image.img_to_array(img)
return tf.expand_dims(img_array, 0) # Create a batch
# TF generated dataset
train_dataset_imported = debiai_project.get_tf_dataset_with_provided_inputs(
model_input_from_debiai_sample,
output_types=(tf.float32, tf.int32),
output_shapes=([None, img_height, img_width, 3], [1, ]),
classes=class_names
)
AUTOTUNE = tf.data.AUTOTUNE
train_dataset_imported = train_dataset_imported.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
# get_tf_dataset_with_provided_inputs also works with a selection
# Train the model
epochs = 3
model.fit(train_dataset_imported, epochs=epochs)
```
|
github_jupyter
|
```
import os
import sys
import json
import tempfile
import pandas as pd
import numpy as np
import datetime
from CoolProp.CoolProp import PropsSI
from math import exp, factorial, ceil
import matplotlib.pyplot as plt
%matplotlib inline
cwd = os.getcwd()
sys.path.append(os.path.normpath(os.path.join(cwd, '..', '..', '..', 'glhe')))
sys.path.append(os.path.normpath(os.path.join(cwd, '..', '..', '..', 'standalone')))
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = [15, 9]
plt.rcParams['font.size'] = 14
pd.set_option('display.max_columns', None)
# pd.set_option('display.max_rows', None)
df = pd.read_csv("out.csv", index_col=0)
df.head(2)
start_time = datetime.datetime(month=1, day=1, year=2018, hour=0, minute=0, second=0)
l = df['Simulation Time'].tolist()
dt = [datetime.timedelta(seconds=x) for x in l]
df.set_index(pd.to_datetime([start_time + x for x in dt]), inplace=True)
df.plot(y=['GLHE Inlet Temperature [C]', 'GLHE Outlet Temperature [C]'])
dT = df['GLHE Inlet Temperature [C]'].diff()
dt = df['GLHE Inlet Temperature [C]'].index.to_series().diff().dt.total_seconds()
df['dT_in/dt'] = dT/dt
df.plot(y='dT_in/dt')
df = df.loc['01-01-2018 02:50:00':'01-01-2018 03:30:00']
def hanby(time, vol_flow_rate, volume):
"""
Computes the non-dimensional response of a fluid conduit
assuming well mixed nodes. The model accounts for the thermal
capacity of the fluid and diffusive mixing.
Hanby, V.I., J.A. Wright, D.W. Fletcher, D.N.T. Jones. 2002.
'Modeling the dynamic response of conduits.' HVAC&R Research 8(1): 1-12.
The model is non-dimensional, so input parameters should have consistent
units so that the non-dimensional time parameter, tau, can be computed.
:math \tau = \frac{\dot{V} \cdot t}{Vol}
:param time: time of fluid response
:param vol_flow_rate: volume flow rate
:param volume: volume of fluid circuit
:return:
"""
tau = vol_flow_rate * time / volume
num_nodes = 20
ret_sum = 1
for i in range(1, num_nodes):
ret_sum += (num_nodes * tau) ** i / factorial(i)
return 1 - exp(-num_nodes * tau) * ret_sum
def hanby_c(time, vol_flow_rate, volume):
return 1 - hanby(time, vol_flow_rate, volume)
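# Sanity checks on the Hanby response: hanby(0, ...) evaluates to 0 (no fluid has
# traversed the conduit yet), and the response approaches 1 as time grows large,
# since the truncated series is dominated by the decaying exponential factor.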
delta_t = df['Simulation Time'][1] - df['Simulation Time'][0]
flow = 0.0002
vol = 0.05688
def calc_exft_correction_factors(timestep, flow_rate, volume):
t_tr = volume / flow_rate
time = np.arange(0, t_tr * 2, timestep)
f = np.array([hanby(x, flow_rate, volume) for x in time])
d = np.diff(f)
r = np.diff(f) / sum(d)
# r = np.append(np.zeros(ceil(t_tr/timestep)), r)
if len(r) == 0:
return np.ones(1)
else:
return r
calc_exft_correction_factors(120, flow, vol)
def update_exft_correction_factors(r):
if len(r) == 1:
return r
elif r[0] == 1:
return r
else:
pop_val = r[0]
l = np.count_nonzero(r) - 1
delta = pop_val / l
for i, val in enumerate(r):
if r[i] == 0:
break
else:
r[i] += delta
r = np.roll(r, -1)
r[-1] = 0
return r
cf_0 = calc_exft_correction_factors(delta_t, flow, vol)
cf_0
cf_1 = update_exft_correction_factors(cf_0)
cf_1
cf_2 = update_exft_correction_factors(cf_1)
cf_2
cf_3 = update_exft_correction_factors(cf_2)
cf_3
cf_4 = update_exft_correction_factors(cf_3)
cf_4
def calc_exft(signal, to_correct):
r = calc_exft_correction_factors(delta_t, flow, vol)
# r = np.array(l)
prev_temps = np.ones(len(r)) * to_correct[0]
prev_signal = signal[0]
dT_dt_prev = 0
new_temps = np.empty([0])
for i, t_sig in enumerate(signal):
dT_dt = (t_sig - prev_signal) / delta_t
# print(dT_dt, t_sig, prev_signal)
if abs(dT_dt - dT_dt_prev) > 0.01:
r = calc_exft_correction_factors(delta_t, flow, vol)
# r = np.array(l)
print(r)
prev_temps[0] = to_correct[i]
new_temp = sum(r * prev_temps)
# print(to_correct[i], new_temp)
new_temps = np.append(new_temps, new_temp)
# print(new_temps)
prev_temps = np.roll(prev_temps, 1)
prev_temps[0] = new_temp
r = update_exft_correction_factors(r)
prev_signal = t_sig
dT_dt_prev = dT_dt
# if i == 10:
# break
# else:
# print('\n')
return new_temps
t_c = calc_exft(df['GLHE Inlet Temperature [C]'], df['GLHE Outlet Temperature [C]'])
df['Corrected Temps'] = t_c
df.plot(y=['GLHE Inlet Temperature [C]', 'GLHE Outlet Temperature [C]', 'Corrected Temps', 'Average Fluid Temp [C]'], marker='X')
df.head(20)
```
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
import re
import glob
import lzma
import pickle
import pandas as pd
import numpy as np
import requests as r
import seaborn as sns
import warnings
import matplotlib as mpl
import matplotlib.pyplot as plt
from joblib import hash
from collections import Counter
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import NearestCentroid
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import RidgeClassifier, RidgeClassifierCV, PassiveAggressiveClassifier
warnings.simplefilter('ignore')
mpl.style.use('ggplot')
```
## Source Data
If the source data is missing, run an Elasticsearch query to extract the data and then save it in JSON format to the `data` directory.
```
# news_json = r.get('http://localhost:9200/indice/doc/_search?sort=date:desc&size=4000').json()
# with open('./data/news.json', 'w', encoding='utf8') as fh:
# dump(news_json['hits']['hits'], fh)
# df = pd.io.json.json_normalize(news_json['hits']['hits'])
# df.to_json('./data/news.json')
df = pd.read_json('./data/news.json')
```
## Common issues that we generally face during the data preparation phase:
- Format and structure normalization
- Detect and fix missing values
- Duplicates removal
- Units normalization
- Constraints validations
- Anomaly detection and removal
- Study of features importance/relevance
- Dimensionality reduction, feature selection & extraction
```
df = df[['_source.body', '_source.date', '_source.subject', '_source.language', '_source.categories']]
df.columns = ['body', 'pubdate', 'subject', 'language', 'categories']
df.drop_duplicates(inplace=True)
df.head(1).T.style
df = df.loc[(df['categories'] != 'News') &
(df['categories'] != 'articles 2015') &
(df['categories'] != 'frontpage') &
(df['categories'] != 'English') &
(df['categories'] != 'Comment') &
(df['categories'] != 'Uncategorized') &
(df['language'] == 'English')]
df['categories'] = df['categories'].str.replace(r'[^a-zA-Z_, ]+', '').replace(', ', '')
df['categories'] = df['categories'].str.replace(r'^, ', '')
df.groupby(['categories']).agg({'count'}).drop_duplicates()
df['cat_id'] = df['categories'].factorize()[0]
df['lang_id'] = df['language'].factorize()[0]
df['char_count'] = df['body'].apply(len)
df['word_count'] = df['body'].apply(lambda x: len(x.split()))
df['word_density'] = df['char_count'] / (df['word_count']+1)
df.shape
sns.set()
sns.pairplot(df, height=3.5, kind="reg", palette="husl", diag_kind="auto")
xtrain, xtest, ytrain, ytest = train_test_split(df['body'], df['categories'], test_size=0.2, random_state=42)
tfidf = TfidfVectorizer(use_idf=False, sublinear_tf=True, min_df=5, norm='l2', encoding='latin-1', ngram_range=(1, 2), stop_words='english')
features = tfidf.fit_transform(df.body).toarray()
labels = df.cat_id
engines = [('PassiveAggressiveClassifier', PassiveAggressiveClassifier(fit_intercept=True, n_jobs=-1, random_state=0)),
('NearestCentroid', NearestCentroid()),
('RandomForestClassifier', RandomForestClassifier(min_samples_leaf=0.01))]
for name, engine in engines:
clf = make_pipeline(tfidf, engine).fit(xtrain, ytrain)
prediction = clf.predict(xtest)
score = clf.score(xtest, ytest)
with lzma.open('./data/{}.pickle.xz'.format(name.lower()), 'wb') as f:
pickle.dump(clf, f, protocol=5)
s = '''
‘Guys, you’ve got to hear this,” I said. I was sitting in front of my computer one day in July 2012, with one eye on a screen of share prices and the other on a live stream of the House of Commons Treasury select committee hearings. As the Barclays share price took a graceful swan dive, I pulled my headphones out of the socket and turned up the volume so everyone could hear. My colleagues left their terminals and came around to watch BBC Parliament with me.
It didn’t take long to realise what was happening. “Bob’s getting murdered,” someone said.
Bob Diamond, the swashbuckling chief executive of Barclays, had been called before the committee to explain exactly what his bank had been playing at in regards to the Libor rate-fixing scandal. The day before his appearance, he had made things very much worse by seeming to accuse the deputy governor of the Bank of England of ordering him to fiddle an important benchmark, then walking back the accusation as soon as it was challenged. He was trying to turn on his legendary charm in front of a committee of angry MPs, and it wasn’t working. On our trading floor, in Mayfair, calls were coming in from all over the City. Investors needed to know what was happening and whether the damage was reparable.
A couple of weeks later, the damage was done. The money was gone, Diamond was out of a job and the market, as it always does, had moved on. We were left asking ourselves: How did we get it so wrong?
'''
result = []
for file in glob.glob('./data/*.pickle.xz'):
clf = pickle.load(lzma.open('{}'.format(file), 'rb'))
ypred = clf.predict([s])
score = clf.score([s], ypred)
print(file, ypred[0], score)
result.append(ypred[0])
print(pd.io.json.dumps(Counter(result), indent=4))
```
|
github_jupyter
|
```
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_excel(r'C:\Users\kundi\Moji_radovi\MVanalysis\datasetup\MV_DataFrame.xlsx')
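# Glossary for the Croatian column names and terms used below:
#   'Uplaćeno' - payment timestamp, 'Uplata' - payment amount, 'Opis' - description,
#   'Partner' - customer/partner, 'Sat' - hour, 'Datum' - date,
#   'akontacije' - advance payments, 'uplate' - (regular) payments,
#   'nesukladnosti' - non-conformities (advance payments above the 0.51 kn threshold)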
df['Sat'] = df['Uplaćeno'].astype(str).str.slice(-8,-6)
df['Datum'] = df['Uplaćeno'].astype(str).str.slice(-19,-13)
df.info()
df
df.drop(columns = ['Uplaćeno'], inplace = True)
akontacije = df.loc[df['Opis'] == 'Akontacija platomat']
uplate = df.loc[df['Opis'] != 'Akontacija platomat']
akontacije
uplate
akontacije.describe()
uplate.describe()
veće_od_50lp = akontacije.loc[akontacije['Uplata'] > 0.51]
veće_od_50lp
print('Broj nesukladnosti je:', int(veće_od_50lp['Partner'].count()))
print('Broj akontacija je:', int(akontacije['Partner'].count()))
nesukladnosti = int(veće_od_50lp['Partner'].count()) / int(akontacije['Partner'].count())
print('Postotak detektiranih nesukladnosti je: ', (nesukladnosti * 100), '%')
plt.boxplot(akontacije['Uplata'])
plt.grid()
plt.boxplot(uplate['Uplata'])
plt.grid()
df.columns
akontacije
dani = df['Datum'].unique()
dani = sorted(dani, key = lambda x: x.split('.')[1])
dani
uplate_po_danu = uplate.groupby('Datum').sum()
uplate_po_danu
akontacije_po_danu = akontacije.groupby('Datum').sum()
akontacije_po_danu
fig = plt.figure(figsize=(25,5))
fig.suptitle('Uplaćeni iznosi kroz vrijeme', fontsize = 20, weight = 'bold')
plt.plot(df['Uplata'])
plt.axhline(y = 122.11, color = 'k', linestyle = 'solid')
plt.xlabel('Broj uplata', fontsize = 12, weight = 'semibold')
plt.ylabel('Iznos uplata', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Histogram vrijednosti uplata', fontsize = 20, weight = 'bold')
plt.hist(df['Uplata'], bins=10, ec = 'm')
plt.xlabel('Iznos uplate', fontsize = 12, weight = 'semibold')
plt.ylabel('Broj uplata', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Pojavljivanje nesukladnosti (Vrijednosti iznad iscrtkane linije)', fontsize = 20, weight = 'bold')
plt.plot(akontacije['Uplata'])
plt.axhline(y = 0.51, color = 'k', linestyle = 'dashed')
plt.xlabel('Broj uplata', fontsize = 12, weight = 'semibold')
plt.ylabel('Iznos akontacije', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
uplate.info()
sati = df['Sat'].unique()
sati.sort()
uplate_po_satu = uplate.groupby('Sat').sum()
uplate_po_satu
fig = plt.figure(figsize=(25,5))
fig.suptitle('Uplate po satima', fontsize = 20, weight = 'bold')
plt.bar(sati, uplate_po_satu['Uplata'])
plt.xlabel('Sati', fontsize = 12, weight = 'semibold')
plt.ylabel('Suma uplata', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
akontacije_po_satu = akontacije.groupby('Sat').sum()
akontacije_po_satu_zbroj = akontacije.groupby('Sat').count()
akontacije_po_satu
akontacije_po_satu_zbroj
fig = plt.figure(figsize=(25,5))
fig.suptitle('Akontacije po satima u kn', fontsize = 20, weight = 'bold')
plt.bar(sati, akontacije_po_satu['Uplata'])
plt.xlabel('Sati', fontsize = 12, weight = 'semibold')
plt.ylabel('Suma akontacija', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Broj izvršenih akontacija po satima', fontsize = 20, weight = 'bold')
plt.bar(sati, akontacije_po_satu_zbroj['Uplata'])
plt.xlabel('Sati', fontsize = 12, weight = 'semibold')
plt.ylabel('Broj akontacija', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Vrijednost akontacija po danima u kn', fontsize = 20, weight = 'bold')
plt.bar(dani, akontacije_po_danu['Uplata'])
plt.xlabel('Dani', fontsize = 12, weight = 'semibold')
plt.ylabel('Suma akontacija', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Vrijednost uplata po danima u kn', fontsize = 20, weight = 'bold')
plt.bar(dani, uplate_po_danu['Uplata'])
plt.xlabel('Dani', fontsize = 12, weight = 'semibold')
plt.ylabel('Suma uplata', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
veće_od_50lp_po_danu_zbroj = veće_od_50lp.groupby('Datum').sum()
veće_od_50lp_po_danu = veće_od_50lp.groupby('Datum').count()
veće_od_50lp_po_danu
veće_od_50lp_po_danu_zbroj
dani_nesukladnosti = veće_od_50lp['Datum'].unique()
dani_nesukladnosti.sort()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Izravan trošak nesukladnosti po danu u kn', fontsize = 20, weight = 'bold')
plt.bar(dani_nesukladnosti , veće_od_50lp_po_danu_zbroj['Uplata'])
plt.xlabel('Dani', fontsize = 12, weight = 'semibold')
plt.ylabel('Trošak nesukladnosti', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
fig = plt.figure(figsize=(25,5))
fig.suptitle('Broj nesukladnosti po danu u mjesecu', fontsize = 20, weight = 'bold')
plt.bar(dani_nesukladnosti, veće_od_50lp_po_danu['Uplata'])
plt.xlabel('Dani', fontsize = 12, weight = 'semibold')
plt.ylabel('Broj nesukladnosti', fontsize = 12, weight = 'semibold')
plt.grid()
plt.show()
```
|
github_jupyter
|
# Convolutional Neural Network
## Import Dependencies
```
%matplotlib inline
from imp import reload
import itertools
import numpy as np
import utils; reload(utils)
from utils import *
from __future__ import print_function
from sklearn.metrics import confusion_matrix, classification_report, f1_score
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding, SpatialDropout1D
from keras.layers import LSTM
from keras.layers import Conv1D, GlobalMaxPooling1D
from keras.layers import Flatten
from keras.datasets import imdb
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
from IPython.display import SVG
from IPython.display import Image
```
## Configure Parameters
```
# Embedding
embedding_size = 50
max_features = 5000
maxlen = 400
# Convolution
kernel_size = 3
pool_size = 4
filters = 250
# Dense
hidden_dims = 250
# Training
batch_size = 64
epochs = 4
```
## Data Preparation
```
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# Pad sequences
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('Train data size:', x_train.shape)
print('Test data size:', x_test.shape)
```
## Modelling
```
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_size,
input_length=maxlen))
model.add(Dropout(0.2))
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
model.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
# plot_model(model, to_file='model.png', show_shapes=True)
# Image(filename = 'model.png')
# SVG(model_to_dot(model).create(prog='dot', format='svg'))
```
## Evaluation
```
# Train the model
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
verbose=1)
# Evaluate model
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
preds = model.predict_classes(x_test, batch_size=batch_size)
# Save the model weights
model_path = 'data/imdb/models/'
model.save(model_path + 'cnn_model.h5')
model.save_weights(model_path + 'cnn_weights.h5')
# Confusion Matrix
cm = confusion_matrix(y_test, preds)
plot_confusion_matrix(cm, {'negative': 0, 'positive': 1})
# F1 score
f1_macro = f1_score(y_test, preds, average='macro')
f1_micro = f1_score(y_test, preds, average='micro')
print('Test accuracy:', acc)
print('Test score (loss):', score)
print('')
print('F1 Score (Macro):', f1_macro)
print('F1 Score (Micro):', f1_micro)
```
|
github_jupyter
|
# Comparison of the data taken with a long adaptation time
(c) 2019 Manuel Razo. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT)
---
```
import os
import glob
import re
# Our numerical workhorses
import numpy as np
import scipy as sp
import pandas as pd
# Import matplotlib stuff for plotting
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib as mpl
# Seaborn, useful for graphics
import seaborn as sns
# Import the project utils
import sys
sys.path.insert(0, '../../../')
import ccutils
# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline
%config InlineBackend.figure_format = 'retina'
tmpdir = '../../tmp/'
datadir = '../../../data/csv_microscopy/'
# Set PBoC plotting format
ccutils.viz.set_plotting_style()
# Increase dpi
mpl.rcParams['figure.dpi'] = 110
```
## Comparing the data
For this dataset taken on `20190814`, I grew cells overnight in M9 media, the reason being that I wanted to make sure that cells had no memory of ever having been in LB.
```
df_long = pd.read_csv('outdir/20190816_O2__M9_growth_test_microscopy.csv',
comment='#')
df_long[['date', 'operator', 'rbs', 'mean_intensity', 'intensity']].head()
```
Now let's read the rest of the datasets taken with the laser system.
```
# Read the tidy-data frame
files = glob.glob(datadir + '/*IPTG*csv')# + mwc_files
df_micro = pd.concat(pd.read_csv(f, comment='#') for f in files if 'Oid' not in f)
## Remove data sets that are ignored because of problems with the data quality
## NOTE: These data sets are kept in the repository for transparency, but they
## failed at one of our quality criteria
## (see README.txt file in microscopy folder)
ignore_files = [x for x in os.listdir('../../image_analysis/ignore_datasets/')
if 'microscopy' in x]
# Extract data from these files
ignore_dates = [int(x.split('_')[0]) for x in ignore_files]
# Remove these dates
df_micro = df_micro[~df_micro['date'].isin(ignore_dates)]
# Keep only the O2 operator
df_micro = df_micro[df_micro.operator == 'O2']
df_micro[['date', 'operator', 'rbs', 'mean_intensity', 'intensity']].head()
```
Let's now look at the O2 $\Delta lacI$ strain data. For this, we first have to extract the mean autofluorescence value. Let's process the new data first.
```
# Define names for columns in dataframe
names = ['date', 'IPTG_uM','operator', 'binding_energy',
'rbs', 'repressors', 'mean', 'std', 'noise']
# Initialize df_long frame to save the noise
df_noise_long = pd.DataFrame(columns=names)
# Extract the mean autofluorescence
I_auto = df_long[df_long.rbs == 'auto'].intensity.mean()
# Extract the strain fluorescence measurements
strain_df_long = df_long[df_long.rbs == 'delta']
# Group df_long by IPTG measurement
df_long_group = strain_df_long.groupby('IPTG_uM')
for inducer, df_long_inducer in df_long_group:
# Append the required info
strain_info = [20190624, 0, df_long_inducer.operator.unique()[0],
df_long_inducer.binding_energy.unique()[0],
df_long_inducer.rbs.unique()[0],
df_long_inducer.repressors.unique()[0],
(df_long_inducer.intensity - I_auto).mean(),
(df_long_inducer.intensity - I_auto).std(ddof=1)]
# Check if the values are negative for very small noise
if strain_info[int(np.where(np.array(names) == 'mean')[0])] > 0:
# Compute the noise
strain_info.append(strain_info[-1] / strain_info[-2])
# Convert to a pandas series to attach to the df_longframe
strain_info = pd.Series(strain_info, index=names)
# Append to the info to the df_long frame
df_noise_long = df_noise_long.append(strain_info,
ignore_index=True)
df_noise_long.head()
# group by date and by IPTG concentration
df_group = df_micro.groupby(['date'])
# Define names for columns in data frame
names = ['date', 'IPTG_uM','operator', 'binding_energy',
'rbs', 'repressors', 'mean', 'std', 'noise']
# Initialize data frame to save the noise
df_noise_delta = pd.DataFrame(columns=names)
for date, data in df_group:
# Extract the mean autofluorescence
I_auto = data[data.rbs == 'auto'].intensity.mean()
# Extract the strain fluorescence measurements
strain_data = data[data.rbs == 'delta']
# Group data by IPTG measurement
data_group = strain_data.groupby('IPTG_uM')
for inducer, data_inducer in data_group:
# Append the required info
strain_info = [date, inducer, data_inducer.operator.unique()[0],
data_inducer.binding_energy.unique()[0],
data_inducer.rbs.unique()[0],
data_inducer.repressors.unique()[0],
(data_inducer.intensity - I_auto).mean(),
(data_inducer.intensity - I_auto).std(ddof=1)]
# Check if the values are negative for very small noise
if strain_info[int(np.where(np.array(names) == 'mean')[0])] > 0:
# Compute the noise
strain_info.append(strain_info[-1] / strain_info[-2])
# Convert to a pandas series to attach to the dataframe
strain_info = pd.Series(strain_info, index=names)
# Append to the info to the data frame
df_noise_delta = df_noise_delta.append(strain_info,
ignore_index=True)
df_noise_delta.head()
```
It seems that the noise is exactly the same for both illumination systems, ≈ 0.4-0.5.
Let's look at the ECDF of single-cell fluorescence values. For all measurements to be comparable we will plot the fold-change distribution. What this means is that we will extract the mean autofluorescence value and we will normalize by the mean intensity of the $\Delta lacI$ strain.
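Concretely, for each single-cell intensity $I$ the fold-change is computed as
$\text{fold-change} = \frac{I - \langle I_{\text{auto}} \rangle}{\langle I_{\Delta} \rangle - \langle I_{\text{auto}} \rangle}$
where $\langle I_{\text{auto}} \rangle$ is the mean autofluorescence intensity and $\langle I_{\Delta} \rangle$ is the mean intensity of the $\Delta lacI$ strain from the same data set.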
```
# group laser data by date
df_group = df_micro.groupby('date')
colors = sns.color_palette('Blues', n_colors=len(df_group))
# Loop through dates
for j, (g, d) in enumerate(df_group):
# Extract mean autofluorescence
auto = d.loc[d.rbs == 'auto', 'intensity'].mean()
# Extract mean delta
delta = d.loc[d.rbs == 'delta', 'intensity'].mean()
# Keep only delta data
data = d[d.rbs == 'delta']
fold_change = (data.intensity - auto) / (delta - auto)
# Generate ECDF
x, y = ccutils.stats.ecdf(fold_change)
# Plot ECDF
plt.plot(x, y, lw=0, marker='.', color=colors[j],
alpha=0.3, label='')
## LED
# Extract mean autofluorescence
auto_long = df_long.loc[df_long.rbs == 'auto', 'intensity'].mean()
delta_long = df_long.loc[df_long.rbs == 'delta', 'intensity'].mean()
# Compute fold-change for delta strain
fold_change = (df_long[df_long.rbs == 'delta'].intensity - auto_long) /\
(delta_long - auto_long)
# Generate ECDF
x, y = ccutils.stats.ecdf(fold_change)
# Plot ECDF
plt.plot(x, y, lw=0, marker='v', color='red',
alpha=0.3, label='24 hour', ms=3)
# Add fake plot for legend
plt.plot([], [], marker='.', color=colors[-1],
alpha=0.3, label='8 hour', lw=0)
# Label x axis
plt.xlabel('fold-change')
# Add legend
plt.legend()
# Label y axis of left plot
plt.ylabel('ECDF')
# Change limit
plt.xlim(right=3)
plt.savefig('outdir/ecdf_comparison.png', bbox_inches='tight')
```
There is no difference whatsoever. Maybe it is not the memory of LB, but the memory of having been in a lag phase for quite a while.
## Comparison with theoretical prediction.
Let's compare these datasets with the theoretical prediction we obtained from the MaxEnt approach.
First we need to read the Lagrange multipliers to reconstruct the distribution.
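Schematically, the maximum entropy distribution reconstructed from the protein moment constraints takes the form
$P(p) = \frac{1}{\mathcal{Z}} \exp\left( \sum_{i} \lambda_i \, p^{m_i} \right)$
where the $\lambda_i$ are the Lagrange multipliers read in below, the $m_i$ are the exponents of the constrained protein moments, and $\mathcal{Z}$ normalizes the distribution over the protein sample space.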
```
# Define directory for MaxEnt data
maxentdir = '../../../data/csv_maxEnt_dist/'
# Read resulting values for the multipliers.
df_maxEnt = pd.read_csv(maxentdir + 'MaxEnt_Lagrange_mult_protein.csv')
df_maxEnt.head()
```
Now let's define the necessary objects to build the distribution from these constraints obtained with the MaxEnt method.
```
# Extract protein moments in constraints
prot_mom = [x for x in df_maxEnt.columns if 'm0' in x]
# Define index of moments to be used in the computation
moments = [tuple(map(int, re.findall(r'\d+', s))) for s in prot_mom]
# Define sample space
mRNA_space = np.array([0])
protein_space = np.arange(0, 1.9E4)
# Extract values to be used
df_sample = df_maxEnt[(df_maxEnt.operator == 'O1') &
(df_maxEnt.repressor == 0) &
(df_maxEnt.inducer_uM == 0)]
# Select the Lagrange multipliers
lagrange_sample = df_sample.loc[:, [col for col in df_sample.columns
if 'lambda' in col]].values[0]
# Compute distribution from Lagrange multipliers values
Pp_maxEnt = ccutils.maxent.maxEnt_from_lagrange(mRNA_space,
protein_space,
lagrange_sample,
exponents=moments).T[0]
mean_p = np.sum(protein_space * Pp_maxEnt)
```
Now we can compare both distributions.
```
# Define binstep for plot, meaning how often to plot
# an entry
binstep = 10
## LED
# Extract mean autofluorescence
auto_long = df_long.loc[df_long.rbs == 'auto', 'intensity'].mean()
delta_long = df_long.loc[df_long.rbs == 'delta', 'intensity'].mean()
# Compute fold-change for delta strain
fold_change = (df_long[df_long.rbs == 'delta'].intensity - auto_long) /\
(delta_long - auto_long)
# Generate ECDF
x, y = ccutils.stats.ecdf(fold_change)
# Plot ECDF
plt.plot(x, y, lw=0, marker='v', color='red',
alpha=0.3, label='20 hour', ms=3)
# Plot MaxEnt results
plt.plot(protein_space[0::binstep] / mean_p, np.cumsum(Pp_maxEnt)[0::binstep],
drawstyle='steps', label='MaxEnt', lw=2)
# Add legend
plt.legend()
# Label axis
plt.ylabel('CDF')
plt.xlabel('fold-change')
plt.savefig('outdir/maxent_comparison.png', bbox_inches='tight')
```
|
github_jupyter
|
# DCGAN - Create Images from Random Numbers!
### Generative Adversarial Networks
Ever since Ian Goodfellow and colleagues [introduced the concept of Generative Adversarial Networks (GANs)](https://arxiv.org/abs/1406.2661), GANs have been a popular topic in the field of AI. GANs are an application of unsupervised learning - you don't need labels for your dataset in order to train a GAN.
The GAN framework composes of two neural networks: a generator network and a discriminator network.
The generator's job is to take a set of random numbers and produce data (such as images or text).
The discriminator then takes in that data as well as samples of that data from a dataset and tries to determine if it is "fake" (created by the generator network) or "real" (from the original dataset).
During training, the two networks play a game against each other.
The generator tries to create realistic data, so that it can fool the discriminator into thinking that the data it generated is from the original dataset. At the same time, the discriminator tries to not be fooled - it learns to become better at determining if data is real or fake.
Since the two networks are fighting in this game, they can be seen as adversaries, which is where the term "Generative Adversarial Network" comes from.
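Formally, this game corresponds to the minimax objective introduced in the original GAN paper:
$\min\limits_G \max\limits_D ~ \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]$
where $D(x)$ is the discriminator's estimate of the probability that $x$ is real and $G(z)$ is the data the generator produces from the random vector $z$.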
### Deep Convolutional Generative Adversarial Networks
This notebook takes a look at Deep Convolutional Generative Adversarial Networks (DCGANs), which combine Convolutional Neural Networks (CNNs) and GANs.
We will create a DCGAN that is able to create images of handwritten digits from random numbers.
The tutorial uses the neural net architecture and guidelines outlined in [this paper](https://arxiv.org/abs/1511.06434), and the MNIST dataset.
## How to Use This Tutorial
You can use this tutorial by executing each snippet of python code in order as it appears in the notebook.
In this tutorial, we will train a DCGAN on MNIST, which will ultimately produce two neural networks:
- The first net is the "generator" and creates images of handwritten digits from random numbers.
- The second net is the "discriminator" and determines if the image created by the generator is real (a realistic looking image of handwritten digits) or fake (an image that doesn't look like it came from the original dataset).
Apart from creating a DCGAN, you'll also learn:
- How to manipulate and iterate through batches of images that you can feed into your neural network.
- How to create a custom MXNet data iterator that generates random numbers from a normal distribution.
- How to create a custom training process in MXNet, using lower level functions from the [MXNet Module API](http://mxnet.io/api/python/module.html) such as `.bind()`, `.forward()` and `.backward()`. The training process for a DCGAN is more complex than that of many other neural nets, so we need to use these functions instead of the higher level `.fit()` function.
- How to visualize images as they are going through the training process
## Prerequisites
This notebook assumes you're familiar with the concept of CNNs and have implemented one in MXNet. If you haven't, check out [this tutorial](https://github.com/dmlc/mxnet-notebooks/blob/master/python/tutorials/mnist.ipynb), which walks you through implementing a CNN in MXNet. You should also be familiar with the concept of logistic regression.
Having a basic understanding of MXNet data iterators helps, since we'll create a custom data iterator to iterate through random numbers as inputs to our generator network. Take a look at [this tutorial](https://github.com/dmlc/mxnet-notebooks/blob/master/python/basic/data.ipynb) for a better understanding of how MXNet `DataIter` works.
This example is designed to be trained on a single GPU. Training this network on CPU can be slow, so it's recommended that you use a GPU for training.
To complete this tutorial, you need:
- [MXNet](http://mxnet.io/get_started/setup.html#overview)
- [Python 2.7](https://www.python.org/download/releases/2.7/), and the following libraries for Python:
- [Numpy](http://www.numpy.org/) - for matrix math
- [OpenCV](http://opencv.org/) - for image manipulation
- [Scikit-learn](http://scikit-learn.org/) - to easily get our dataset
- [Matplotlib](https://matplotlib.org/) - to visualize our output
## The Data
We need two pieces of data to train our DCGAN:
1. Images of handwritten digits from the MNIST dataset
2. Random numbers from a normal distribution
Our generator network will use the random numbers as the input to produce images of handwritten digits, and our discriminator network will use images of handwritten digits from the MNIST dataset to determine if the images produced by our generator are realistic.
We are going to use the python library, scikit-learn, to get the MNIST dataset. Scikit-learn comes with a function that gets the dataset for us, which we will then manipulate to create our training and testing inputs.
The MNIST dataset contains 70,000 images of handwritten digits. Each image is 28x28 pixels in size.
To create random numbers, we're going to create a custom MXNet data iterator, which will return random numbers from a normal distribution as we need them.
## Prepare the Data
### 1. Preparing the MNIST dataset
Let's start by preparing our handwritten digits from the MNIST dataset. We import the fetch_mldata function from scikit-learn, and use it to get the MNIST dataset. Notice that its shape is 70000x784. This contains the 70000 images, one per row, and the 784 pixels of each image in the columns of each row. Each image is 28x28 pixels, but has been flattened so that all 784 pixels are represented in a single list.
```
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
mnist.data.shape
```
Next, we'll randomize the handwritten digits by using numpy to create a random permutation of the dataset's rows (images). We'll then reshape our dataset from 70000x784 to 70000x28x28, so that every image in our dataset is arranged into a 28x28 grid, where each cell in the grid represents 1 pixel of the image.
```
import numpy as np
#Use a seed so that we get the same random permutation each time
np.random.seed(1)
p = np.random.permutation(mnist.data.shape[0])
X = mnist.data[p]
X = X.reshape((70000, 28, 28))
```
Since the DCGAN that we're creating takes in a 64x64 image as the input, we'll use OpenCV to resize each 28x28 image to 64x64:
```
import cv2
X = np.asarray([cv2.resize(x, (64,64)) for x in X])
```
Each pixel in our 64x64 image is represented by a number between 0-255 that represents the intensity of the pixel. However, we want to input numbers between -1 and 1 into our DCGAN, as suggested by the research paper. To rescale our pixels to be in the range of -1 to 1, we'll divide each pixel by (255/2). This puts our images on a scale of 0-2. We can then subtract by 1, to get them in the range of -1 to 1.
```
X = X.astype(np.float32)/(255.0/2) - 1.0
```
Ultimately, images are fed into our neural net as a 70000x3x64x64 array, and they are currently in a 70000x64x64 array. We need to add 3 channels to our images. Typically, when we are working with images, the 3 channels represent the red, green, and blue components of each image. Since the MNIST dataset is grayscale, we will simply replicate the single channel across all 3 channels:
```
X = X.reshape((70000, 1, 64, 64))
X = np.tile(X, (1, 3, 1, 1))
```
Finally, we'll put our images into MXNet's NDArrayIter, which will allow MXNet to easily iterate through our images during training. We'll also split the images into batches, with 64 images in each batch. Every time we iterate, we'll get a 4 dimensional array with size `(64, 3, 64, 64)`, representing a batch of 64 images.
```
import mxnet as mx
batch_size = 64
image_iter = mx.io.NDArrayIter(X, batch_size=batch_size)
```
## 2. Preparing Random Numbers
We need to input random numbers from a normal distribution to our generator network, so we'll create an MXNet DataIter that produces random numbers for each training batch. The `DataIter` is the base class of [MXNet's Data Loading API](http://mxnet.io/api/python/io.html). Below, we create a class called `RandIter` which is a subclass of `DataIter`. If you want to know more about how MXNet data loading works in python, please look at [this notebook](https://github.com/dmlc/mxnet-notebooks/blob/master/python/basic/data.ipynb). We use MXNet's built in `mx.random.normal` function in order to return the normally distributed random numbers every time we iterate.
```
class RandIter(mx.io.DataIter):
def __init__(self, batch_size, ndim):
self.batch_size = batch_size
self.ndim = ndim
self.provide_data = [('rand', (batch_size, ndim, 1, 1))]
self.provide_label = []
def iter_next(self):
return True
def getdata(self):
#Returns random numbers from a gaussian (normal) distribution
#with mean=0 and standard deviation = 1
return [mx.random.normal(0, 1.0, shape=(self.batch_size, self.ndim, 1, 1))]
```
When we initialize our `RandIter`, we need to provide two numbers: the batch size and how many random numbers we want to produce a single image from. This number is referred to as `Z`, and we'll set this to 100. This value comes from the research paper on the topic. Every time we iterate and get a batch of random numbers, we will get a 4 dimensional array with shape `(batch_size, Z, 1, 1)`, which in our example is `(64, 100, 1, 1)`.
```
Z = 100
rand_iter = RandIter(batch_size, Z)
```
## Create the Model
Our model has two networks that we will train together - the generator network and the disciminator network.
Below is an illustration of our generator network:
<img src="dcgan-model.png">
Source: https://arxiv.org/abs/1511.06434
The discriminator works exactly the same way but in reverse - using convolutional layers instead of deconvolutional layers to take an image and determine if it is real or fake.
The DCGAN paper recommends the following best practices for architecting DCGANs:
- Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).
- Use batchnorm in both the generator and the discriminator.
- Remove fully connected hidden layers for deeper architectures.
- Use ReLU activation in generator for all layers except for the output, which uses Tanh.
- Use LeakyReLU activation in the discriminator for all layers.
Our model will implement these best practices.
### The Generator
Let's start off by defining the generator network:
```
no_bias = True
fix_gamma = True
epsilon = 1e-5 + 1e-12
rand = mx.sym.Variable('rand')
g1 = mx.sym.Deconvolution(rand, name='g1', kernel=(4,4), num_filter=1024, no_bias=no_bias)
gbn1 = mx.sym.BatchNorm(g1, name='gbn1', fix_gamma=fix_gamma, eps=epsilon)
gact1 = mx.sym.Activation(gbn1, name='gact1', act_type='relu')
g2 = mx.sym.Deconvolution(gact1, name='g2', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=512, no_bias=no_bias)
gbn2 = mx.sym.BatchNorm(g2, name='gbn2', fix_gamma=fix_gamma, eps=epsilon)
gact2 = mx.sym.Activation(gbn2, name='gact2', act_type='relu')
g3 = mx.sym.Deconvolution(gact2, name='g3', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=256, no_bias=no_bias)
gbn3 = mx.sym.BatchNorm(g3, name='gbn3', fix_gamma=fix_gamma, eps=epsilon)
gact3 = mx.sym.Activation(gbn3, name='gact3', act_type='relu')
g4 = mx.sym.Deconvolution(gact3, name='g4', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=128, no_bias=no_bias)
gbn4 = mx.sym.BatchNorm(g4, name='gbn4', fix_gamma=fix_gamma, eps=epsilon)
gact4 = mx.sym.Activation(gbn4, name='gact4', act_type='relu')
g5 = mx.sym.Deconvolution(gact4, name='g5', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=3, no_bias=no_bias)
generatorSymbol = mx.sym.Activation(g5, name='gact5', act_type='tanh')
```
Our generator starts with random numbers that will be obtained from the `RandIter` we created earlier, so we created the `rand` variable for this input.
We then start creating the model with a Deconvolution layer (sometimes called a 'fractionally strided' layer). We apply batch normalization and ReLU activation after the Deconvolution layer.
We repeat this process 4 times, applying a `(2,2)` stride and `(1,1)` pad at each Deconvolutional layer, which doubles the size of our image at each layer. By creating these layers, our generator network will have to learn to upsample our input vector of random numbers, `Z`, at each layer, so that the network outputs a final image. We also halve the number of filters at each layer, reducing dimensionality at each layer. Ultimately, our output layer is a 64x64x3 layer, representing the size and channels of our image. We use tanh activation instead of relu on the last layer, as recommended by the research on DCGANs. The outputs of the neurons in the final activation layer (`gact5`) represent the pixels of the generated image.
Notice we used 3 parameters to help us create our model: no_bias, fix_gamma, and epsilon.
Neurons in our network won't have a bias added to them; this seems to work better in practice for the DCGAN.
In our batch norm layers, we set `fix_gamma=True`, which means `gamma=1` for all of our batch norm layers.
`epsilon` is a small number that gets added to our batch norm so that we don't end up dividing by zero. By default, CuDNN requires that this number is greater than `1e-5`, so we add a small number to this value, ensuring it stays small.
### The Discriminator
Let's now create our discriminator network, which will take in images of handwritten digits from the MNIST dataset and images created by the generator network:
```
data = mx.sym.Variable('data')
d1 = mx.sym.Convolution(data, name='d1', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=128, no_bias=no_bias)
dact1 = mx.sym.LeakyReLU(d1, name='dact1', act_type='leaky', slope=0.2)
d2 = mx.sym.Convolution(dact1, name='d2', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=256, no_bias=no_bias)
dbn2 = mx.sym.BatchNorm(d2, name='dbn2', fix_gamma=fix_gamma, eps=epsilon)
dact2 = mx.sym.LeakyReLU(dbn2, name='dact2', act_type='leaky', slope=0.2)
d3 = mx.sym.Convolution(dact2, name='d3', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=512, no_bias=no_bias)
dbn3 = mx.sym.BatchNorm(d3, name='dbn3', fix_gamma=fix_gamma, eps=epsilon)
dact3 = mx.sym.LeakyReLU(dbn3, name='dact3', act_type='leaky', slope=0.2)
d4 = mx.sym.Convolution(dact3, name='d4', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=1024, no_bias=no_bias)
dbn4 = mx.sym.BatchNorm(d4, name='dbn4', fix_gamma=fix_gamma, eps=epsilon)
dact4 = mx.sym.LeakyReLU(dbn4, name='dact4', act_type='leaky', slope=0.2)
d5 = mx.sym.Convolution(dact4, name='d5', kernel=(4,4), num_filter=1, no_bias=no_bias)
d5 = mx.sym.Flatten(d5)
label = mx.sym.Variable('label')
discriminatorSymbol = mx.sym.LogisticRegressionOutput(data=d5, label=label, name='dloss')
```
We start off by creating the `data` variable, which is used to hold our input images to the discriminator.
The discriminator then goes through a series of 5 convolutional layers with 4x4 kernels; the first four use a 2x2 stride and 1x1 pad, halving the size of the image (which starts at 64x64) at each of these layers. The model also increases dimensionality by doubling the number of filters per convolutional layer, starting at 128 filters and ending at 1024 filters before we flatten the output.
The final convolution collapses the remaining feature map, and we flatten the result to get one number as the final output of the discriminator network. This number is the probability that the image is real, as determined by our discriminator, and we use logistic regression to determine it. When we pass in "real" images from the MNIST dataset, we label them `1`, and we label the "fake" images from the generator network `0`, so that logistic regression can be performed on the discriminator's output.
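As a quick sanity check (a sketch with assumed shapes: `batch_size=64` and 3x64x64 inputs, matching the generator above), we can infer the shape coming out of the flattened convolutional stack, which should be a single score per image before the logistic regression layer:
```
# Sanity check (a sketch): infer the discriminator's pre-loss output shape.
# batch_size=64 and 3x64x64 inputs are assumptions matching the generator above.
batch_size = 64
_, out_shapes, _ = d5.infer_shape(data=(batch_size, 3, 64, 64))
print(out_shapes)  # expect [(64, 1)]: one real/fake score per image
```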
### Prepare the models using the `Module` API
So far we have defined an MXNet `Symbol` for both the generator and the discriminator network.
Before we can train our model, we need to bind these symbols using the `Module` API, which creates the computation graph for our models. It also allows us to decide how we want to initialize our model and what type of optimizer we want to use. Let's set up `Module` for both of our networks:
```
#Hyperparameters
sigma = 0.02
lr = 0.0002
beta1 = 0.5
ctx = mx.gpu(0)
#=============Generator Module=============
generator = mx.mod.Module(symbol=generatorSymbol, data_names=('rand',), label_names=None, context=ctx)
generator.bind(data_shapes=rand_iter.provide_data)
generator.init_params(initializer=mx.init.Normal(sigma))
generator.init_optimizer(
optimizer='adam',
optimizer_params={
'learning_rate': lr,
'beta1': beta1,
})
mods = [generator]
# =============Discriminator Module=============
discriminator = mx.mod.Module(symbol=discriminatorSymbol, data_names=('data',), label_names=('label',), context=ctx)
discriminator.bind(data_shapes=image_iter.provide_data,
label_shapes=[('label', (batch_size,))],
inputs_need_grad=True)
discriminator.init_params(initializer=mx.init.Normal(sigma))
discriminator.init_optimizer(
optimizer='adam',
optimizer_params={
'learning_rate': lr,
'beta1': beta1,
})
mods.append(discriminator)
```
First, we create `Modules` for our networks and then bind the symbols that we've created in the previous steps to our modules.
We use `rand_iter.provide_data` as the `data_shapes` to bind our generator network. This means that as we iterate through batches of data on the generator `Module`, our `RandIter` will provide us with random numbers to feed our `Module` using its `provide_data` function.
Similarly, we bind the discriminator `Module` to `image_iter.provide_data`, which gives us images from MNIST from the `NDArrayIter` we had set up earlier, called `image_iter`.
Notice that we're using the `Normal` initialization, with the hyperparameter `sigma=0.02`. This means the weight initializations for the neurons in our networks will be random numbers drawn from a Gaussian (normal) distribution with a mean of 0 and a standard deviation of 0.02.
We also use the Adam optimizer for gradient descent. We've set up two hyperparameters, `lr` and `beta1`, based on the values used in the DCGAN paper. We're using a single GPU, `gpu(0)`, for training.
### Visualizing Our Training
Before we train the model, let's set up some helper functions that will help visualize what our generator is producing, compared to what the real image is:
```
from matplotlib import pyplot as plt
#Takes the images in our batch and arranges them in an array so that they can be
#Plotted using matplotlib
def fill_buf(buf, num_images, img, shape):
    # number of images that fit along each axis of the buffer
    width = buf.shape[0]//shape[1]
    height = buf.shape[1]//shape[0]
    # top-left corner of the slot for this image (integer division keeps the indices ints)
    img_width = (num_images%width)*shape[0]
    img_height = (num_images//height)*shape[1]
    buf[img_height:img_height+shape[1], img_width:img_width+shape[0], :] = img
#Plots two images side by side using matplotlib
def visualize(fake, real):
#64x3x64x64 to 64x64x64x3
fake = fake.transpose((0, 2, 3, 1))
#Pixel values from 0-255
fake = np.clip((fake+1.0)*(255.0/2.0), 0, 255).astype(np.uint8)
#Repeat for real image
real = real.transpose((0, 2, 3, 1))
real = np.clip((real+1.0)*(255.0/2.0), 0, 255).astype(np.uint8)
#Create buffer array that will hold all the images in our batch
#Fill the buffer so to arrange all images in the batch onto the buffer array
n = np.ceil(np.sqrt(fake.shape[0]))
fbuff = np.zeros((int(n*fake.shape[1]), int(n*fake.shape[2]), int(fake.shape[3])), dtype=np.uint8)
for i, img in enumerate(fake):
fill_buf(fbuff, i, img, fake.shape[1:3])
rbuff = np.zeros((int(n*real.shape[1]), int(n*real.shape[2]), int(real.shape[3])), dtype=np.uint8)
for i, img in enumerate(real):
fill_buf(rbuff, i, img, real.shape[1:3])
#Create a matplotlib figure with two subplots: one for the real and the other for the fake
#fill each plot with our buffer array, which creates the image
fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax1.imshow(fbuff)
ax2 = fig.add_subplot(2,2,2)
ax2.imshow(rbuff)
plt.show()
```
## Fit the Model
Training the DCGAN is a complex process that requires multiple steps.
To fit the model, for every batch of data in our dataset:
1. Use the `Z` vector, which contains our random numbers to do a forward pass through our generator. This outputs the "fake" image, since it's created from our generator.
2. Use the fake image as the input to do a forward and backwards pass through the discriminator network. We set our labels for our logistic regression to `0` to represent that this is a fake image. This trains the discriminator to learn what a fake image looks like. We save the gradient produced by backpropagation for the next step.
3. Do a forwards and backwards pass through the discriminator using a real image from our dataset. Our label for logistic regression will now be `1` to represent real images, so our discriminator can learn to recognize a real image.
4. Update the discriminator by adding the gradient generated during backpropagation on the fake image to the gradient from backpropagation on the real image.
5. Now that the discriminator has been updated for this batch, we still need to update the generator. First, do a forward and backwards pass with the same batch on the updated discriminator to produce a new input gradient. Then use that gradient to do a backwards pass through the generator and update the generator's weights.
Here's the main training loop for our DCGAN:
```
# =============train===============
print('Training...')
for epoch in range(1):
image_iter.reset()
for i, batch in enumerate(image_iter):
#Get a batch of random numbers to generate an image from the generator
rbatch = rand_iter.next()
#Forward pass on training batch
generator.forward(rbatch, is_train=True)
#Output of training batch is the 64x64x3 image
outG = generator.get_outputs()
#Pass the generated (fake) image through the discriminator, and save the gradient
#Label (for logistic regression) is an array of 0's since this image is fake
label = mx.nd.zeros((batch_size,), ctx=ctx)
        #Forward pass of the discriminator on the generated (fake) images
discriminator.forward(mx.io.DataBatch(outG, [label]), is_train=True)
#Do the backwards pass and save the gradient
discriminator.backward()
gradD = [[grad.copyto(grad.context) for grad in grads] for grads in discriminator._exec_group.grad_arrays]
#Pass a batch of real images from MNIST through the discriminator
#Set the label to be an array of 1's because these are the real images
label[:] = 1
batch.label = [label]
#Forward pass on a batch of MNIST images
discriminator.forward(batch, is_train=True)
#Do the backwards pass and add the saved gradient from the fake images to the gradient
#generated by this backwards pass on the real images
discriminator.backward()
for gradsr, gradsf in zip(discriminator._exec_group.grad_arrays, gradD):
for gradr, gradf in zip(gradsr, gradsf):
gradr += gradf
#Update gradient on the discriminator
discriminator.update()
#Now that we've updated the discriminator, let's update the generator
#First do a forward pass and backwards pass on the newly updated discriminator
#With the current batch
discriminator.forward(mx.io.DataBatch(outG, [label]), is_train=True)
discriminator.backward()
#Get the input gradient from the backwards pass on the discriminator,
#and use it to do the backwards pass on the generator
diffD = discriminator.get_input_grads()
generator.backward(diffD)
#Update the gradients on the generator
generator.update()
#Increment to the next batch, printing every 50 batches
i += 1
if i % 50 == 0:
print('epoch:', epoch, 'iter:', i)
            print()
print(" From generator: From MNIST:")
visualize(outG[0].asnumpy(), batch.data[0].asnumpy())
```
Here we have our GAN being trained, and we can visualize the progress as our networks train. Every 50 iterations, we call the `visualize` function that we created earlier, which creates the visual plots during training.
The plot on our left is what our generator created (the fake image) in the most recent iteration. The plot on the right is the original (real) image from the MNIST dataset that was inputted to the discriminator on the same iteration.
As training goes on, the generator becomes better at producing realistic images; you can see this as the images on the left grow closer to the original dataset with each iteration.
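Once training finishes, generating new digits just means pushing fresh noise through the generator. The snippet below is a minimal sketch (not part of the original tutorial) that reuses the objects defined above and assumes `numpy` and `matplotlib` are already imported as in the earlier cells.
```
# A minimal sketch: sample new images from the trained generator.
rbatch = rand_iter.next()                       # fresh batch of random Z vectors
generator.forward(rbatch, is_train=False)       # inference-mode forward pass
samples = generator.get_outputs()[0].asnumpy()  # (batch_size, 3, 64, 64), values in [-1, 1]
# Rescale the first sample to 0-255 and display it
img = np.clip((samples[0].transpose((1, 2, 0)) + 1.0) * (255.0 / 2.0), 0, 255).astype(np.uint8)
plt.imshow(img)
plt.axis('off')
plt.show()
```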
## Summary
We've now successfully used Apache MXNet to train a Deep Convolutional GAN using the MNIST dataset.
As a result, we've created two neural nets: a generator, which is able to create images of handwritten digits from random numbers, and a discriminator, which is able to take an image and determine if it is an image of handwritten digits.
Along the way, we've learned how to do the image manipulation and visualization that's associated with training deep neural nets. We've also learned how to use some of MXNet's advanced training functionality to fit our model.
## Acknowledgements
This tutorial is based on [MXNet DCGAN codebase](https://github.com/dmlc/mxnet/blob/master/example/gan/dcgan.py), the [original paper on GANs](https://arxiv.org/abs/1406.2661), as well as [this paper](https://arxiv.org/abs/1511.06434) on deep convolutional GANs.
# Download Patent DB & Adding Similarity Data
On its own, the similarity data provides patent doc2vec vectors and some pre-calculated similarity scores. However, it is much more useful in conjunction with a dataset containing other patent metadata. To achieve this, it is useful to download a patent dataset and join it with the similarity data.
There are a number of sources of patent data; if you already have a working dataset, it may be easiest to join the similarity data to it. If, however, you do not have a local dataset, you can easily download the data from <a href="http://www.patentsview.org/download/">Patentsview</a>.
Patentsview offers a lot of data on their bulk download page. For ease of downloading, I have created a Python script that will take care of parsing all those URLs, downloading the CSV files, and reading them into a SQLite database. If you want a local version of the patent data, I recommend you use that script (available <a href = "https://github.com/ryanwhalen/patentsview_data_download">here</a>). Download the 'patentsview_download.py' file to the same folder as this iPython notebook and run the code below. Note that downloading may take a significant amount of time. So, run the script using the code below and then go make a cup of coffee. Then go to bed, do whatever you want to do over the course of the next couple of days, and then come back and check up on it.
```
%run ./patentsview_download.py
```
Once you've run the script above, you'll have a local database called 'patent_db.sqlite.' If you want a GUI to check out the contents, I recommend <a href="https://sqlitestudio.pl/">SQLite Studio</a> as a nice open-source option.
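As a quick programmatic check instead, the sketch below (assuming 'patent_db.sqlite' sits in your current working directory) lists the tables the download script created:
```
# A quick sketch to confirm the database was built (assumes patent_db.sqlite
# is in the current working directory; adjust the path otherwise).
import sqlite3

conn = sqlite3.connect('patent_db.sqlite')
tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(sorted(t[0] for t in tables))
conn.close()
```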
The next step is to add the similarity tables to your database. We'll run a separate python script to do so.
```
%run ./write_sim_data_to_db.py
```
# Initial Similarity Explorations
Everything from here on out assumes that you're using the SQLite database as constructed above. If you've chosen to marry the similarity data to your own dataset, you'll need to adapt the below as required.
First, let's import a few packages and connect to the DB.
```
import pandas as pd
import sqlite3
import seaborn as sns
import numpy as np
import random
import gensim
from matplotlib import pyplot as plt
import networkx as nx
import itertools
import os
from sklearn.metrics.pairwise import cosine_similarity
from scipy import stats
from collections import defaultdict
import json
import csv
db_path ='/mnt/BigDisk1/patent_db_20191231/' #file path to your db file here
conn = sqlite3.connect(db_path+'patent_db.sqlite')
cur = conn.cursor()
cur2 = conn.cursor()
```
Let's make a pandas dataframe containing the similarity scores between citing/cited patents and the date the citations were made. Note that this may take a few moments, but once the dataframe has loaded, working with it should be relatively quick, provided your machine has sufficient memory.
```
df = pd.read_sql_query('''SELECT cite_similarity.similarity,
patent.date FROM cite_similarity
JOIN patent ON cite_similarity.patent_id = patent.id''', conn)
```
Let's have a quick look at the dataframe to see what we've loaded
```
df.head()
df.describe()
```
### Plotting the similarity distribution
Plotting the distribution of similarity scores for all citations shows that most patents tend to cite other somewhat similar patents, but that there is also substantial variation.
```
sns.distplot(df['similarity'])
```
We saw above that citing/cited patents have an average similarity of about 0.26. How should we interpret that number? Well, one way is to compare citing/cited similarity with the similarity scores we would expect to see between random patents.
The pre-calculated similarity dataset doesn't contain all pairwise similarity scores, so random pairs are unlikely to have a pre-calculated score. We'll need some code that can take two patent numbers, find their vectors and return the similarity score.
```
def patent_pair_sim(patent1, patent2):
'''takes 2 patent numbers, finds their doc2vec vectors and returns their cosine similarity'''
v1 = cur.execute('''SELECT vector FROM doc2vec WHERE patent_id = ?''',[patent1]).fetchone()
v2 = cur.execute('''SELECT vector FROM doc2vec WHERE patent_id = ?''',[patent2]).fetchone()
if v1 == None or v2 == None: #if either patent has no pre-calculated vector, return None
return None
v1 = json.loads(v1[0])
v2 = json.loads(v2[0])
sim = float(cosine_similarity([v1],[v2])[0])
return sim
```
Let's try that similarity calculating function out. Feel free to tweak the below patent numbers if there's a pair you're interested in comparing.
```
print(patent_pair_sim('9000000','9000001'))
```
To do some sanity checks, let's compare the similarity of patents randomly paired on various criteria. The CPC codes are a handy place to start. The code below will compare the similarity score distributions for patents which share the same section (highest level), class (second level), or subclass (third level) as their primary categorization. We would expect that patents sharing lower-level CPC classifications will have more in common with one another than those that do not.
```
def match_on_cpc(patent, level):
'''takes a patent number and returns a second patent number
    that shares the same primary cpc code at the given level'''
if level == 'subclass':
group = cur.execute('''SELECT group_id FROM cpc_current WHERE
sequence = '0' and patent_id = ?''',[patent]).fetchone()
if group is None:
return None
group = group[0]
match = cur.execute('''SELECT patent_id FROM cpc_current WHERE
group_id = ? ORDER BY RANDOM() LIMIT 1''',[group]).fetchone()
match = match[0]
if level == 'section':
section = cur.execute('''SELECT section_id FROM cpc_current
WHERE sequence = '0' and patent_id = ?''',[patent]).fetchone()
if section is None:
return None
section = section[0]
match = cur.execute('''SELECT patent_id FROM cpc_current WHERE
section_id = ? ORDER BY RANDOM() LIMIT 1''',[section]).fetchone()
match = match[0]
if level == 'class':
class_id = cur.execute('''SELECT subsection_id FROM cpc_current
WHERE sequence = '0' and patent_id = ?''',[patent]).fetchone()
if class_id is None:
return None
class_id = class_id[0]
match = cur.execute('''SELECT patent_id FROM cpc_current WHERE
subsection_id = ? ORDER BY RANDOM() LIMIT 1''',[class_id]).fetchone()
match = match[0]
return match
def get_cpc_match_sims(n, level):
'''returns n random pairwise similarities where the pairs
share the same primary cpc classification at the hierarchical
    level indicated'''
patents = cur2.execute('''SELECT id FROM patent ORDER BY RANDOM()''')
sims = []
for p in patents:
p = p[0]
if not p.isdigit():
continue
match = match_on_cpc(p, level)
if match == None or match == p:
continue
sim = patent_pair_sim(p,match)
if sim == None:
continue
sims.append(sim)
if len(sims) == n:
return sims
```
We can use those functions to get similarity scores for each level of the CPC categorization. This can take some time and requires proper indexing on the DB to work well.
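These lookups hit the `cpc_current` and `doc2vec` tables repeatedly, so if the cell below crawls, it may help to create indexes first. This is a sketch under the assumption that your download script did not already create them; the table and column names are taken from the queries above.
```
# A sketch of helpful indexes (table/column names taken from the queries above;
# skip any that your download script already created).
for stmt in [
    'CREATE INDEX IF NOT EXISTS idx_cpc_patent ON cpc_current(patent_id)',
    'CREATE INDEX IF NOT EXISTS idx_cpc_group ON cpc_current(group_id)',
    'CREATE INDEX IF NOT EXISTS idx_cpc_section ON cpc_current(section_id)',
    'CREATE INDEX IF NOT EXISTS idx_cpc_subsection ON cpc_current(subsection_id)',
    'CREATE INDEX IF NOT EXISTS idx_doc2vec_patent ON doc2vec(patent_id)',
]:
    cur.execute(stmt)
conn.commit()
```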
```
n = 1000
section_match_sims = get_cpc_match_sims(n, level='section')
class_match_sims = get_cpc_match_sims(n, level='class')
subclass_match_sims = get_cpc_match_sims(n, level='subclass')
```
For good measure, we can also compare with randomly paired patents. We would expect these patents to have the least in common with one another.
```
def get_random_pairwise_sims(patents, n):
'''returns the similarities between n randomly paired patents'''
sims = []
while len(sims) < n:
patent1, patent2 = random.sample(patents,2)
sim = patent_pair_sim(patent1, patent2)
if sim is None:
continue
sims.append(sim)
return sims
patents = cur2.execute('''SELECT id FROM patent ORDER BY RANDOM()''').fetchall()
patents = [p[0] for p in patents if p[0].isdigit()]
random_sims = get_random_pairwise_sims(patents, n)
```
And now, we can compare each of these types of pairs and how similar they are to one another
```
fig = plt.figure(1, figsize=(9, 6))
ax = fig.add_subplot(111)
bp = ax.boxplot([random_sims, section_match_sims, class_match_sims, subclass_match_sims])
ax.set_xticklabels(['Random','Section', 'Class', 'Subclass'])
fig.savefig('cpc_sim_comparisons_bopxplots.png', bbox_inches='tight', dpi=300)
```
As you can see, the similarity scores track what we would expect: random patent pairs are least similar; pairs sharing the same section are somewhat more similar; those sharing the same class are more similar still; and those sharing the same subclass are the most similar. As we can see below, all of these differences are statistically significant.
```
print('Random '+str(np.mean(random_sims)))
print('Section '+str(np.mean(section_match_sims)))
t = stats.ttest_ind(random_sims, section_match_sims)
print(t)
print('Class '+str(np.mean(class_match_sims)))
t = stats.ttest_ind(section_match_sims, class_match_sims)
print(t)
print('Subclass '+str(np.mean(subclass_match_sims)))
t = stats.ttest_ind(class_match_sims, subclass_match_sims)
print(t)
```
Now, let's get a list of all of the patents, so that we can select some random pairs to compare.
```
def get_all_patents():
'''returns a list of all patent numbers in the DB'''
patents = cur.execute('''SELECT id FROM patent''').fetchall()
patents = [p[0] for p in patents]
patents = [p for p in patents if p.isdigit()] #this removes non-numerical patents like design, plant, etc.
return patents
patents = get_all_patents()
```
Now let's find the scores for some random pairs and plot that distribution.
```
sims = []
for i in range(10000):
pair = random.choices(patents, k=2)
sim = patent_pair_sim(pair[0],pair[1])
if sim is not None:
sims.append(sim)
sns.distplot(sims)
print(np.mean(sims))
```
### Comparing citing/cited similarity to random pairwise similarity
Plotting the two distributions side-by-side shows that - as we would expect - patents that share a citation relationship tend to be more similar than those that do not.
```
fig, ax = plt.subplots()
sns.kdeplot(df['similarity'], shade=True, ax=ax, label='Citation Similarity', linestyle = '--')
sns.kdeplot(sims, shade=True, ax = ax, label = 'Random Pairwise Similarity')
fig = ax.get_figure()
fig.savefig('cite_vs_random_sim.png', dpi=300)
```
### Citation similarity over time
Plotting the citation similarity by yearly mean reveals a trend towards decreasing similarity between citing and cited patents.
```
df['date'] = pd.to_datetime(df['date'])
yearly_means = df.groupby(df.date.dt.year).mean()
ax = yearly_means.plot()
fig = ax.get_figure()
fig.savefig('yearly_cite_sim.png', dpi=300)
```
# Patent-Level Similarity Metrics
As well as identifying global trends, similarity metrics can also provide insight into single inventions. Many patent metrics use citations in combination with metadata such as technical classifications as proxy measures of either knowledge inputs (e.g. Originality) or impact (e.g. Generality) (_see_ Trajtenberg, Jaffe and Henderson, 1997).
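For reference, these classification-based measures are typically defined as Herfindahl-style concentration indexes over the technology classes of a patent's citations (the notation below is mine, paraphrasing the standard definition):

$Originality_i = 1 - \sum\limits_{j} s_{ij}^2$

where $s_{ij}$ is the share of patent $i$'s backward citations that fall in technology class $j$; Generality is defined analogously over forward citations.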
The code below can be used to generate a network of forward or backward (i.e. citing or cited) references and their similarity scores. These networks can subsequently be used to define measures of impact or input knowledge diversity. The blue arrows in the diagram below show backward and forward citation relationships in relation to the focal patent document, while the red arrows represent four different proposed similarity-based citation metrics: (a) knowledge proximity; (b) knowledge homogeneity; (c) impact proximity; and (d) impact homogeneity.
<img src = "cite_metrics.png">
## Forward and backward distance (knowledge proximity, and impact proximity)
By comparing a patent with its cited or citing prior art, these measures provide insight into the degree to which an invention draws on distant information, or alternately goes on to impact similar or dissimilar inventions.
Knowledge proximity measures the similarity between the focal patent and its cited backward references. To do so, we calculate the similarities between a patent and its cited prior art, and take the minimum of these similarities as the knowledge proximity score. This provides insight into the degree to which the invention integrates any one piece of particularly distant knowledge. A low knowledge proximity score demonstrates that the invention in question cited prior art from a very dissimilar field.
Impact proximity is calculated in a similar manner, but instead measures the similarity between the focal patent and its citing forward references. This provides an impact measure that accounts for the degree to which an invention goes on to influence technical areas that are similar or dissimilar to its own.
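Written out, the two proximity measures implemented below are simply minima over citation-linked similarity scores (the notation is mine):

$KnowledgeProximity(p) = \min\limits_{q \in B(p)} sim(p, q) ~~~~ ImpactProximity(p) = \min\limits_{q \in F(p)} sim(p, q)$

where $B(p)$ is the set of patents cited by $p$ and $F(p)$ is the set of patents citing $p$ within ten years of its grant.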
For some of our measures, we'll want to know both a patent's granting year and the years of other related patents. The function below determines the granting year of any patent, while the `yearly_maxes` dictionary stores the highest patent number granted in each year of the dataset.
```
def patent_year(patent):
'''takes a patent number and returns an integer of the year it was granted'''
date = cur.execute('''SELECT date FROM patent WHERE id = ?''',[patent]).fetchone()
year = int(date[0].split('-')[0])
return year
def find_yearly_maxes():
'''returns a dictionary keyed by year, with values for the highest patent number
granted in that year'''
yearly_maxes = {}
years = range(1976,2020)
for year in years:
patents = cur.execute('''SELECT id FROM patent
WHERE strftime('%Y', date) = ?''', [str(year)]).fetchall()
patents = [p[0] for p in patents]
patents = [int(p) for p in patents if p.isdigit()]
yearly_maxes[year] = max(patents)
return yearly_maxes
yearly_maxes = find_yearly_maxes()
def prior_art_proximity(patent):
'''takes a patent number, identifies similarity scores for backwards citations and returns
the min similarity score - a demonstration of the degree to which the invention draws on distant knowledge'''
sims = cur.execute('''SELECT similarity FROM cite_similarity WHERE patent_id = ?''',[patent]).fetchall()
if sims == None:
return None
sims = [s[0] for s in sims]
if len(sims) == 0:
return None
return min(sims)
def impact_proximity(patent):
'''takes a patent number, identifies similarity scores for forward citations and returns
the min similarity score - a demonstration of the degree to which the invention has influenced distant areas'''
year = patent_year(patent)
max_patent = yearly_maxes[year + 10] #the maximum patent number for forward metric comparisons
sims = []
cites = cur.execute('''SELECT patent_id, similarity FROM cite_similarity WHERE citation_id = ?''',[patent]).fetchall()
if cites == None:
return None
for cite in cites:
try:
patent = int(cite[0])
except:
continue #skip design, plant and other non numeric patents
if patent > max_patent: #skip patents granted more than 10-years after focal patent
continue
sims.append(cite[1])
if len(sims) == 0:
return None
return min(sims)
```
We'll want to plot our data by year, which the below function will allow us to do.
```
def plot_yearly_means(data, label):
'''takes dictionary with year keys and mean values and plots change over time'''
xs = sorted(data.keys())
ys = [data[x] for x in xs]
plt.plot(xs,ys)
plt.legend([label])
plt.tight_layout()
plt.savefig(label.replace(' ','')+'.png', dpi=300)
plt.show()
```
To use the above proximity code and assess potential changes over time, we can use a random sample of patents. The function below will randomly sample _n_ patents per year and return those patents as lists in a dictionary keyed by year. To address the truncation in citation data availability, we create two different samples, one to demonstrate the backwards-oriented measures and one to demonstrate the forwards-oriented measures.
```
def random_yearly_sample(n, years):
'''takes a vector of years and returns a dict of patents with n randomly sampled per year where year is the key'''
sample = {}
for year in years:
patents = cur.execute('''SELECT id FROM patent WHERE strftime('%Y', date) = ?
ORDER BY RANDOM() LIMIT ?''',[str(year), n]).fetchall()
patents = [p[0] for p in patents]
sample[year]=patents
return sample
backward_sample = random_yearly_sample(10000,range(1986,2020)) #sample for backward citation metrics
forward_sample = random_yearly_sample(10000,range(1976,2010)) #sample for forward citation metrics
```
### Prior Art Proximity
With the sample in hand, we can then calculate the average prior art or impact proximity by year to determine whether there have been changes over time. Note that depending on the size of the sample, this might take some time as it may require many database calls. The cell below will compute the knowledge proximity scores for the random sample we created above to calculate the backwards-focused measures on.
```
data = {}
for year in backward_sample:
kp = [prior_art_proximity(i) for i in backward_sample[year]]
kp = [k for k in kp if k is not None]
data[year] = np.mean(kp)
plot_yearly_means(data, 'Prior Art Proximity')
```
### Impact proximity
Now let's do the same but calculate the forward-oriented impact proximity.
```
data = {}
for year in forward_sample:
kp = [impact_proximity(i) for i in forward_sample[year]]
kp = [k for k in kp if k is not None]
data[year] = np.mean(kp)
plot_yearly_means(data, 'Impact Proximity')
```
### Co-citing and co-cited similarities
Having seen the changes in knowledge and impact proximity over time, let us now look to whether or not knowledge homogeneity or impact homogeneity have changed over time. To do so, we will again use our random sample of yearly patents. This time however, because knowledge homogeneity and impact homogeneity require comparing co-cited or co-citing prior art, we calculate the pairwise similarities between all of the citing or cited prior art for the focal patent. The functions below will perform these calculations and return the minimum similarity between all of the patents cited by the focal patent (knowledge homogeneity) or all of the patents that cite the focal patent (impact homogeneity).
```
def impact_homogeneity(patent, metric = 'min'):
'''takes patent number and returns the minimum similarity
between co-citing prior art (similar to generality)
currently implemented to only work for patents we have pre-modeled vectors for
    By default returns minimum similarity between citing patents,
passing metric = mean or median will return those instead '''
year = patent_year(patent)
max_patent = yearly_maxes[year + 10] #the maximum patent number for forward metric comparisons
sims = []
cites = cur.execute('''SELECT patent_id FROM uspatentcitation WHERE citation_id = ?''',[patent]).fetchall()
if len(cites) < 2: #undefined if fewer than 2 forward cites
return None
cites = [c[0] for c in cites if c[0].isdigit()] #slice patent numbers out of returned tuples
cites = [c for c in cites if int(c) < max_patent]
for p1, p2 in itertools.combinations(cites, 2):
try: #not all patents will have vectors, so use this try loop here
sim = patent_pair_sim(p1, p2)
sims.append(sim)
except:
continue
sims = [s for s in sims if s is not None]
if len(sims) < 1:
return None
if metric == 'min':
return min(sims)
if metric == 'mean':
return np.mean(sims)
if metric == 'median':
return np.median(sims)
def prior_art_homogeneity(patent, metric = 'min'):
'''takes patent number and returns the minimum similarity
between co-cited prior art (similar to originality)
    By default returns minimum similarity between cited patents,
passing metric = mean or median will return those instead '''
sims = []
    cites = cur.execute('''SELECT citation_id FROM cite_similarity WHERE patent_id = ?''',[patent]).fetchall()
if len(cites) < 2:
return None
cites = [c[0] for c in cites]
for p1, p2 in itertools.combinations(cites, 2):
sim = patent_pair_sim(p1, p2)
sims.append(sim)
sims = [s for s in sims if s is not None]
if len(sims) < 1:
return None
if metric == 'min':
return min(sims)
if metric == 'mean':
return np.mean(sims)
if metric == 'median':
return np.median(sims)
```
### Prior Art Homogeneity
Now let's apply the homogeneity analyses on our backward sample for the knowledge homogeneity score:
```
data = {}
for year in backward_sample:
kp = [prior_art_homogeneity(patent) for patent in backward_sample[year]]
kp = [k for k in kp if k is not None]
data[year] = np.mean(kp)
plot_yearly_means(data, 'Prior Art Homogeneity')
```
### Impact Homogeneity
And on forward samples for the impact homogeneity score:
```
data = {}
for year in forward_sample:
kp = [impact_homogeneity(patent) for patent in forward_sample[year]]
kp = [k for k in kp if k is not None]
data[year] = np.mean(kp)
plot_yearly_means(data, 'Impact Homogeneity')
```
### Changes in technology space
The above shows that both backward/forward citation similarity and co-cited/co-citing citation similarity have decreased over time. Part of this is likely due to the increasing 'size' of the technological space: as more new inventions are produced, the possible distances between them increase. We can estimate the magnitude of this by randomly sampling patents granted within a given year and plotting their average similarity. If desired, the raw similarity measures above can be adjusted to show their divergence from the similarities we would expect at random (a sketch of one such adjustment follows the code below).
```
def patents_by_year(year):
'''returns a set of utility patents granted in the year passed
as argument'''
patents = cur.execute('''SELECT id FROM patent
WHERE strftime('%Y', date) = ?''', [str(year)]).fetchall()
patents = [p[0] for p in patents]
patents = [int(p) for p in patents if p.isdigit()]
return patents
data = {}
years = range(1976,2019)
for year in years:
patents = patents_by_year(year)
sims = get_random_pairwise_sims(patents, 10000)
data[year] = np.mean(sims)
plot_yearly_means(data, 'Technological Space Change')
```
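As mentioned above, one simple adjustment is to express a yearly series relative to this random-pair baseline. The sketch below assumes you saved the Prior Art Proximity means under the hypothetical name `prior_art_by_year` before this cell overwrote `data`; adjust the names to whatever you actually used.
```
# A sketch of baseline-adjusting a yearly series. `prior_art_by_year` is an
# assumption: save the `data` dict from the Prior Art Proximity cell under that
# name before running this. `data` here is the random-pair baseline just computed.
random_by_year = dict(data)
adjusted = {year: prior_art_by_year[year] - random_by_year[year]
            for year in prior_art_by_year if year in random_by_year}
plot_yearly_means(adjusted, 'Prior Art Proximity Baseline Adjusted')
```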
### Similarity by citation type
The above four patent-level citation measures provide insight into how inventions are related to the prior art that they cite, and to the patents that go on to cite them. However, one might also be interested in citations as traces of the patent application and examination process. Research has suggested that the citations added by patent examiners are qualitatively different from those added by the patent applicants themselves. We can use the patent similarity data to get a sense of the degree to which this is reflected in the semantic similarity of the cited prior art.
The below function will return a vector of similarity scores for a random sample of citations. It takes as an argument either 'cited by examiner' or 'cited by applicant'.
```
def get_sims_by_cite_type(n, cite_type):
'''takes a citation type (cited by applicant, cited by examiner, or cited by other)
and returns n random similarity scores between the cited and citing patent'''
cites = cur.execute('''SELECT patent_id, citation_id FROM uspatentcitation
WHERE category = ? ORDER BY RANDOM() LIMIT ?''', [cite_type, n]).fetchall()
sims = []
for cite in cites:
try:
sims.append(patent_pair_sim(cite[0], cite[1]))
except:
pass #skip combos not in pre-calculated model
return sims
examiner_sims = get_sims_by_cite_type(50000, 'cited by examiner')
applicant_sims = get_sims_by_cite_type(50000, 'cited by applicant')
examiner_sims = [s for s in examiner_sims if s is not None]
applicant_sims = [s for s in applicant_sims if s is not None]
fig, ax = plt.subplots()
sns.kdeplot(examiner_sims, shade=True, ax=ax, label='Examiner')
sns.kdeplot(applicant_sims, shade=True, ax = ax, label = 'Applicant', linestyle = '--')
plt.savefig('examiner_applicant_sims'+'.png', dpi=300)
t = stats.ttest_ind(examiner_sims, applicant_sims)
print(t)
```
## Nearest Neighbors
The patent similarity dataset also includes data on each patent’s 100 nearest neighbors. These are the 100 patents from the dataset that are closest to the focal patent, along with their accompanying similarity scores. These data can be used for a wide variety of analyses, including those that provide perspective on how crowded an invention’s “neighborhood” is.
As an example, consider the neighborhoods of both litigated and non-litigated patents. To examine whether they differ from one another, we begin with the litigated patent data and identify the similarity between each litigated patent and its nearest neighbor. We then compare these similarity scores with the similarity between non-litigated patents and their nearest neighbors. Having a very similar nearest neighbor suggests that the patent in question is in a ‘crowded’ intellectual property space, with perhaps many other competing, blocking, or related patents, whereas having only more distant neighbors suggests an invention is relatively unique. By comparing the distributions of nearest-neighbor similarities for both litigated and non-litigated patents, we can see that, on average, litigated patents tend to have much more similar nearest neighbors than their non-litigated counterparts, and a wider distribution of these scores.
```
def make_litigated_patent_set(path):
    '''uses data file from the Schwartz et al. litigated patent dataset, returns a set of
patent numbers involved in infringement litigation'''
infile = open(path ,encoding = 'utf-8')
reader = csv.DictReader(infile)
infringement_litigated_patents = set()
count = 0
for row in reader:
patent = row['patent']
doc_type = row['patent_doc_type']
case_types = [row['case_type_1'], row['case_type_2'],row['case_type_3']]
if '1' in case_types and doc_type == 'Patent':
count += 1
infringement_litigated_patents.add(patent)
return infringement_litigated_patents
def get_nearest_neighbor_sim(patent):
'''takes a patent number, returns the similarity score for its nearest neighbor
'''
sims = cur.execute('''SELECT top_100 FROM most_similar
WHERE patent_id = ?''',[patent]).fetchone()
if sims is None:
return None
sims = json.loads(sims[0])
sims = [s[1] for s in sims]
return max(sims)
path_to_litigated_dataset = '' #add path to this dataset file here
litigated_patents = make_litigated_patent_set(path_to_litigated_dataset)
litigated_sims = [get_nearest_neighbor_sim(p) for p in litigated_patents]
litigated_sims = [s for s in litigated_sims if s is not None]
all_patents = get_all_patents()
random_sims = []
while len(random_sims) < len(litigated_sims):
patent = random.choice(all_patents)
sim = get_nearest_neighbor_sim(patent)
if sim is not None:
random_sims.append(sim)
fig, ax = plt.subplots()
sns.kdeplot(litigated_sims, shade = 1, color = 'red', label = 'litigated', linestyle='--')
sns.kdeplot(random_sims, shade = 1, color='blue', label = 'non-litigated')
plt.savefig('litigated_vs_non_litigated.png', dpi=300)
```
# Inventor-Level Metrics
Patent similarity data can also be used to help understand the career of a given inventor. By locating each of an inventor's inventions within semantic space, one can produce a network of their inventions, measure their average, minimum, and maximum similarity scores, identify clusters, or find their mean invention.
The below code demonstrates how to identify and visualize the invention networks for four well known tech company CEOs.
```
def make_inventor_net(inventor, save_path = False):
'''takes inventor ID and returns networkx Graph object containing
    nodes representing each of his/her inventions with links between them
weighted by their doc2vec similarity
if save_path is defined will save a graphml file at the designated path
'''
inventions = cur.execute('''SELECT patent_id FROM patent_inventor
WHERE inventor_id = ?''',[inventor]).fetchall()
g = nx.Graph()
if len(inventions) < 2:
return None
inventions = [i[0] for i in inventions if i[0].isdigit()]
for p1, p2 in itertools.combinations(inventions, 2):
sim = patent_pair_sim(p1, p2)
if sim is None:
continue
g.add_edge(p1, p2, weight = sim)
if save_path != False:
nx.write_graphml(g, save_path)
return g
def make_mst(g):
'''takes a graph object and returns the minimum spanning tree
however, defines MST as the maximum sum of edgeweights for a tree
because the default MST treats weight as distance rather than sim'''
ng = nx.Graph()
for edge in g.edges(data=True):
ng.add_edge(edge[0], edge[1], weight = 1 - edge[2]['weight'])
ng = nx.minimum_spanning_tree(ng)
return ng
def net_stats(g):
'''takes a nx Graph object and returns least similar score (i.e. the similarity
between the most dissimilar inventions) and average pairwise similarity'''
ew = [e[2]['weight'] for e in g.edges(data=True)]
return round(min(ew),3), round(np.mean(ew), 3)
def draw_inventor_net(g, firstname, lastname):
d = dict(g.degree(weight='weight'))
size = [v * 5 for v in d.values()] #rescale weights for visibility
least_sim, mean_sim = net_stats(g)
g = make_mst(g)
pos = nx.spring_layout(g, iterations = 100)
fig, ax = plt.subplots()
nx.draw_networkx_nodes(g, pos, node_size = size,
node_color = 'darkslategrey')
nx.draw_networkx_edges(g, pos)
plt.xticks([])
plt.yticks([])
textstr = '\n'.join((
r"$\bf{"+firstname+"}$"+" "+r"$\bf{"+lastname+"}$",
'Minimum sim=%s' % (least_sim,),
'Mean sim=%s' % (mean_sim,)))
plt.title(textstr)
plt.tight_layout()
plt.savefig(firstname+lastname, dpi=300)
plt.show()
```
The first step is to find the inventor IDs of interest. We can do this by looking through the 'inventor' table of the patent_db. Below are the inventor IDs for four well known tech CEOs. We can use these to plot each of their invention networks.
```
jb_id = '5715399-1'
sj_id = 'D268584-1'
mz_id = '7669123-1'
bg_id = '5552982-2'
jb = make_inventor_net(jb_id)
draw_inventor_net(jb, 'Jeff', 'Bezos')
sj = make_inventor_net(sj_id)
draw_inventor_net(sj, 'Steve', 'Jobs')
bg = make_inventor_net(bg_id)
draw_inventor_net(bg, 'Bill', 'Gates')
mz = make_inventor_net(mz_id)
draw_inventor_net(mz, 'Mark', 'Zuckerberg')
```
These visualized networks show the minimum spanning tree of each inventor's patent similarity network, and some basic statistics. Each of these provides insight into the degree to which an inventor has worked within a single technological domain, or has alternately created a wide variety of dissimilar inventions.
### Inter-inventor similarity
Just as we can visualize a given inventor's invention similarity network, we can also compare inventors to one another by identifying their 'mean' invention (i.e. the mean vector of all their invention vectors) and subsequently calculating the similarity between those.
```
def find_inventor_mean(inventor):
'''takes inventor ID, finds their patent vectors and returns mean vector'''
inventions = cur.execute('''SELECT patent_inventor.patent_id,
doc2vec.vector FROM patent_inventor
JOIN doc2vec
ON patent_inventor.patent_id = doc2vec.patent_id
    WHERE inventor_id = ?''',[inventor]).fetchall()
inventions = [i[1][1:-1] for i in inventions if i!= None]
inventions = [i.split(',') for i in inventions]
for i in range(len(inventions)):
inventions[i] = [float(i) for i in inventions[i]]
if len(inventions) < 1:
return None
return np.mean(inventions, axis = 0)
def make_mean_sim_net(means):
'''takes a list of tuples (node_id, vector) and constructs a network of nodes
with edges weighted by the similarity between their vectors'''
g = nx.Graph()
for i1, i2 in itertools.combinations(means, 2):
inv1 = i1[0]
v1 = i1[1]
inv2 = i2[0]
v2 = i2[1]
sim = float(cosine_similarity(v1.reshape(1,-1), v2.reshape(1,-1))[0])
g.add_edge(inv1, inv2, weight = sim)
return g
def plot_inventor_sim_net(g, filename):
'''takes network of inventors with edges between them weighted by similarity of their mean invention vectors
plots network'''
pos = nx.spring_layout(g, iterations = 100)
nx.draw(g,pos, with_labels = True, node_size = 2000)
labels = nx.get_edge_attributes(g,'weight')
nx.draw_networkx_edge_labels(g,pos,edge_labels=labels)
plt.savefig(filename, dpi=300)
plt.show()
sj = ('Jobs', find_inventor_mean(sj_id))
bg = ('Gates', find_inventor_mean(bg_id))
jb = ('Bezos', find_inventor_mean(jb_id))
mz = ('Zuckerberg', find_inventor_mean(mz_id))
mean_vectors = [sj, bg, jb, mz]
inter_inv_net = make_mean_sim_net(mean_vectors)
plot_inventor_sim_net(inter_inv_net, 'inventor_net.png')
```
# Team-level metrics
In addition to providing insight into individual patents or inventors, similarity data can be useful at the team level to characterize different types of collaborative teams. Some teams are made up of members largely from the same or similar disciplines, while others feature more expertise diversity in their makeup.
To calculate team-level metrics it is often useful to first typify each individual member's expertise by locating their average semantic location (i.e. the average vector of all of their invention vectors). These mid-points can then be used to typify teams—those with large degrees of similarity between their average vectors are made up of members with similar inventing backgrounds, whereas those with little similarity between them have more knowledge-diverse membership.
In the sample code below, we compare the knowledge diversity of two teams, both inventors on Nest thermostat related patents assigned to Google. The first patent (8,757,507) relates to an easy-to-install thermostat, while the second (9,256,230) relates to scheduling a network-connected thermostat. As we can see from the histogram generated below, the team on the first patent has more concentrated expertise (i.e. generally high similarity scores) whereas the second features more knowledge diversity.
```
def get_inventors(patent):
'''takes patent_id returns inventor_ids for listed inventors'''
inventors = cur.execute('''SELECT inventor_id FROM patent_inventor
WHERE patent_id = ?''',[patent]).fetchall()
inventors = [i[0] for i in inventors]
return inventors
def make_team_network(inventors, save_path =False):
    '''takes a list of inventor IDs, finds the mean semantic location for each,
    measures the distance between each of their means and returns a network
object w/ inventor nodes and weighted edges between them representing
the similarity of their average inventions'''
averages = [(i, find_inventor_mean(i)) for i in inventors]
g = nx.Graph()
for i1, i2 in itertools.combinations(averages, 2):
inv1, v1 = i1[0], i1[1]
inv2, v2 = i2[0], i2[1]
if v1 is None or v2 is None:
continue
sim = float(cosine_similarity(v1.reshape(1,-1), v2.reshape(1,-1))[0])
g.add_edge(inv1, inv2, weight = sim)
if save_path != False:
nx.write_graphml(g, save_path)
return g
def plot_degree_dists(g1, label1, g2, label2):
    '''takes two network objects (g1 and g2) and accompanying labels
plots kde of each network degree distribution'''
ew1 = [e[2]['weight'] for e in g1.edges(data=True)]
ew2 = [e[2]['weight'] for e in g2.edges(data=True)]
print(label1 +' average sim: '+str(np.mean(ew1)))
print(label2 +' average sim: '+str(np.mean(ew2)))
fig, ax = plt.subplots()
sns.kdeplot(ew1, shade = True, ax = ax, label = label1)
sns.kdeplot(ew2, shade = True, ax = ax, label = label2, linestyle = '--')
plt.tight_layout()
plt.savefig(label1.replace(',','')+'.png', dpi = 300)
team_net_1 = make_team_network(get_inventors('8757507'))
team_net_2 = make_team_network(get_inventors('9256230'))
plot_degree_dists(team_net_1, '8,757,507', team_net_2, '9,256,230')
```
# Location and firm-level metrics
Because it interfaces easily with other patent data, the patent similarity dataset can also be used to assess innovation at the firm or location level. The code below compares the similarity of inventions made by inventors located in California with those made by inventors located in Louisiana. We see that although the distributions are almost identical, inventions originating in Louisiana are somewhat more likely to be similar to one another than those from California. Similar analyses can be performed to compare firms, or, with slight modifications, to track changes over time at the firm or location level.
```
def calc_pairwise_state_sims(state, n):
    '''takes a state abbreviation and returns a list of n random pairwise
    similarities between patents granted to inventors associated
with that state in the db'''
patents = cur.execute('''SELECT patent_id FROM patent_inventor WHERE patent_inventor.inventor_id in (
SELECT inventor_id FROM location_inventor WHERE location_inventor.location_id in
(SELECT id FROM location WHERE state = ?)) ORDER BY RANDOM() LIMIT ?''',[state, n]).fetchall()
patents = [p[0] for p in patents]
sims = []
while len(sims) < n:
p1, p2 = random.sample(patents,2)
sim = patent_pair_sim(p1, p2)
if sim is not None:
sims.append(sim)
return sims
CA_sims = calc_pairwise_state_sims('CA', 10000)
LA_sims = calc_pairwise_state_sims('LA', 10000)
fig, ax = plt.subplots()
sns.kdeplot(CA_sims, shade=True, ax=ax, label='CA mean = %s' % round(np.mean(CA_sims),4), linestyle = '--')
sns.kdeplot(LA_sims, shade=True, ax = ax, label = 'LA mean = %s' % round(np.mean(LA_sims), 4))
t = stats.ttest_ind(LA_sims, CA_sims)
print(t)
fig.savefig('CA_vs_LA_sim.png', bbox_inches='tight', dpi=300)
conn.close()
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Train and explain models remotely via Azure Machine Learning Compute and deploy model and scoring explainer
_**This notebook illustrates how to use the Azure Machine Learning Interpretability SDK to train and explain a classification model remotely on an Azure Machine Learning Compute target (AMLCompute), and use Azure Container Instances (ACI) for deploying your model and its corresponding scoring explainer as a web service.**_
Problem: IBM employee attrition classification with scikit-learn (train a model and run an explainer remotely via AMLCompute, and deploy model and its corresponding explainer.)
---
## Table of Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Run model explainer locally at training time](#Explain)
1. Apply feature transformations
1. Train a binary classification model
1. Explain the model on raw features
1. Generate global explanations
1. Generate local explanations
1. [Visualize results](#Visualize)
1. [Deploy model and scoring explainer](#Deploy)
1. [Next steps](#Next)
## Introduction
This notebook showcases how to train and explain a classification model remotely via Azure Machine Learning Compute (AMLCompute), download the calculated explanations locally for visualization and inspection, and deploy the final model and its corresponding explainer to Azure Container Instances (ACI).
It demonstrates the API calls needed to submit a run that trains and explains a model on AMLCompute, download the computed explanations, visualize the global and local explanations via a dashboard that provides an interactive way of discovering patterns in model predictions, and use Azure Machine Learning MLOps capabilities to deploy your model and its corresponding explainer.
We will showcase one of the tabular data explainers: TabularExplainer (SHAP) and follow these steps:
1. Develop a machine learning script in Python which involves the training script and the explanation script.
2. Create and configure a compute target.
3. Submit the scripts to the configured compute target to run in that environment. During training, the scripts can read from or write to the datastore. The records of execution (e.g., model, metrics, prediction explanations) are saved as runs in the workspace and grouped under experiments.
4. Query the experiment for logged metrics and explanations from the current and past runs. Use the interpretability toolkit’s visualization dashboard to visualize predictions and their explanation. If the metrics and explanations don't indicate a desired outcome, loop back to step 1 and iterate on your scripts.
5. After a satisfactory run is found, create a scoring explainer and register the persisted model and its corresponding explainer in the model registry.
6. Develop a scoring script.
7. Create an image and register it in the image registry.
8. Deploy the image as a web service in Azure.
|  |
|:--:|
## Setup
Make sure you go through the [configuration notebook](../../../../configuration.ipynb) first if you haven't.
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Initialize a Workspace
Initialize a workspace object from persisted configuration
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
```
## Explain
Create An Experiment: **Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
```
from azureml.core import Experiment
experiment_name = 'explainer-remote-run-on-amlcompute'
experiment = Experiment(workspace=ws, name=experiment_name)
```
## Introduction to AmlCompute
Azure Machine Learning Compute is managed compute infrastructure that allows the user to easily create single- to multi-node compute of the appropriate VM family. It is created **within your workspace region** and is a resource that can be used by other users in your workspace. By default it autoscales up to `max_nodes` when a job is submitted, and it executes in a containerized environment, packaging the dependencies specified by the user.
Since it is managed compute, job scheduling and cluster management are handled internally by Azure Machine Learning service.
For more information on Azure Machine Learning Compute, please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)
If you are an existing BatchAI customer who is migrating to Azure Machine Learning, please read [this article](https://aka.ms/batchai-retirement)
**Note**: As with other Azure services, there are limits on certain resources (for eg. AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
The training script `train_explain.py` is already created for you. Let's have a look.
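The cell below is a minimal sketch (it assumes `train_explain.py` sits in the same folder as this notebook) that prints the script so we can inspect it before submitting:
```
# A sketch: print the training/explanation script for inspection
# (assumes train_explain.py is in the same folder as this notebook).
with open('train_explain.py', 'r') as f:
    print(f.read())
```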
## Submit an AmlCompute run
First, let's check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the supported_vmsizes() function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.
You can also pass a different region to check availability and then re-create your workspace in that region through the [configuration notebook](../../../configuration.ipynb)
```
from azureml.core.compute import ComputeTarget, AmlCompute
AmlCompute.supported_vmsizes(workspace=ws)
# AmlCompute.supported_vmsizes(workspace=ws, location='southcentralus')
```
### Create project directory
Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on
```
import os
import shutil
project_folder = './explainer-remote-run-on-amlcompute'
os.makedirs(project_folder, exist_ok=True)
shutil.copy('train_explain.py', project_folder)
```
### Provision a compute target
> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
You can provision an AmlCompute resource by simply defining two parameters, thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continuously re-use the same target, debug it between jobs, or simply share the resource with other users of your workspace.
* `vm_size`: VM family of the nodes provisioned by AmlCompute. Simply choose from the supported_vmsizes() above
* `max_nodes`: Maximum nodes to autoscale to while running a job on AmlCompute
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
cpu_cluster.wait_for_completion(show_output=True)
```
### Configure & Run
```
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import DEFAULT_CPU_IMAGE
# Create a new runconfig object
run_config = RunConfiguration()
# Set compute target to AmlCompute target created in previous step
run_config.target = cpu_cluster.name
# Set Docker base image to the default CPU-based image
run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE
# Use conda_dependencies.yml to create a conda environment in the Docker image for execution
run_config.environment.python.user_managed_dependencies = False
azureml_pip_packages = [
'azureml-defaults', 'azureml-telemetry', 'azureml-interpret'
]
# Note: this is to pin the scikit-learn version to be same as notebook.
# In production scenario user would choose their dependencies
import pkg_resources
available_packages = pkg_resources.working_set
sklearn_ver = None
pandas_ver = None
for dist in available_packages:
if dist.key == 'scikit-learn':
sklearn_ver = dist.version
elif dist.key == 'pandas':
pandas_ver = dist.version
sklearn_dep = 'scikit-learn'
pandas_dep = 'pandas'
if sklearn_ver:
sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)
if pandas_ver:
pandas_dep = 'pandas=={}'.format(pandas_ver)
# Specify CondaDependencies obj
# The CondaDependencies specifies the conda and pip packages that are installed in the environment
# the submitted job is run in. Note the remote environment(s) needs to be similar to the local
# environment, otherwise if a model is trained or deployed in a different environment this can
# cause errors. Please take extra care when specifying your dependencies in a production environment.
azureml_pip_packages.extend(['pyyaml', sklearn_dep, pandas_dep])
run_config.environment.python.conda_dependencies = CondaDependencies.create(pip_packages=azureml_pip_packages)
# Now submit a run on AmlCompute
from azureml.core.script_run_config import ScriptRunConfig
script_run_config = ScriptRunConfig(source_directory=project_folder,
script='train_explain.py',
run_config=run_config)
run = experiment.submit(script_run_config)
# Show run details
run
```
Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run).
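If the run is still queued or executing, it can also be cancelled directly from the SDK. A minimal sketch, assuming `run` is the submitted run object from the cell above:
```
# Cancel the submitted run from the SDK instead (uncomment if needed)
# run.cancel()
```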
```
%%time
# Shows output of the run on stdout.
run.wait_for_completion(show_output=True)
# delete() is used to deprovision and delete the AmlCompute target. Useful if you want to re-use the compute name
# 'cpu-cluster' in this case, but use a different VM family for instance.
# cpu_cluster.delete()
```
## Download Model Explanation, Model, and Data
```
# Retrieve model for visualization and deployment
from azureml.core.model import Model
import joblib
original_model = Model(ws, 'amlcompute_deploy_model')
model_path = original_model.download(exist_ok=True)
original_svm_model = joblib.load(model_path)
# Retrieve global explanation for visualization
from azureml.interpret import ExplanationClient
# get model explanation data
client = ExplanationClient.from_run(run)
global_explanation = client.download_model_explanation()
# Retrieve x_test for visualization
import joblib
x_test_path = './x_test.pkl'
run.download_file('x_test_ibm.pkl', output_file_path=x_test_path)
x_test = joblib.load(x_test_path)
```
## Visualize
Visualize the explanations
```
from interpret_community.widget import ExplanationDashboard
ExplanationDashboard(global_explanation, original_svm_model, datasetX=x_test)
```
## Deploy
Deploy Model and ScoringExplainer
```
from azureml.core.conda_dependencies import CondaDependencies
# WARNING: to install this, g++ needs to be available on the Docker image and is not by default (look at the next cell)
azureml_pip_packages = [
'azureml-defaults', 'azureml-core', 'azureml-telemetry',
'azureml-interpret'
]
# Note: this is to pin the scikit-learn and pandas versions to be same as notebook.
# In production scenario user would choose their dependencies
import pkg_resources
available_packages = pkg_resources.working_set
sklearn_ver = None
pandas_ver = None
for dist in available_packages:
if dist.key == 'scikit-learn':
sklearn_ver = dist.version
elif dist.key == 'pandas':
pandas_ver = dist.version
sklearn_dep = 'scikit-learn'
pandas_dep = 'pandas'
if sklearn_ver:
sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)
if pandas_ver:
pandas_dep = 'pandas=={}'.format(pandas_ver)
# Specify CondaDependencies obj
# The CondaDependencies specifies the conda and pip packages that are installed in the environment
# the submitted job is run in. Note the remote environment(s) needs to be similar to the local
# environment, otherwise if a model is trained or deployed in a different environment this can
# cause errors. Please take extra care when specifying your dependencies in a production environment.
azureml_pip_packages.extend(['pyyaml', sklearn_dep, pandas_dep])
myenv = CondaDependencies.create(pip_packages=azureml_pip_packages)
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
with open("myenv.yml","r") as f:
print(f.read())
# Retrieve scoring explainer for deployment
scoring_explainer_model = Model(ws, 'IBM_attrition_explainer')
from azureml.core.webservice import Webservice
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
from azureml.core.environment import Environment
from azureml.exceptions import WebserviceException
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={"data": "IBM_Attrition",
"method" : "local_explanation"},
description='Get local explanations for IBM Employee Attrition data')
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score_remote_explain.py", environment=myenv)
# Use configs and models generated above
service = Model.deploy(ws, 'model-scoring-service', [scoring_explainer_model, original_model], inference_config, aciconfig)
try:
service.wait_for_deployment(show_output=True)
except WebserviceException as e:
print(e.message)
print(service.get_logs())
raise
import requests
# Create data to test service with
examples = x_test[:4]
input_data = examples.to_json()
headers = {'Content-Type':'application/json'}
# Send request to service
print("POST to url", service.scoring_uri)
resp = requests.post(service.scoring_uri, input_data, headers=headers)
# Can convert back to Python objects from the JSON string if desired
print("prediction:", resp.text)
service.delete()
```
## Next
Learn about other use cases of the explain package on a:
1. [Training time: regression problem](https://github.com/interpretml/interpret-community/blob/master/notebooks/explain-regression-local.ipynb)
1. [Training time: binary classification problem](https://github.com/interpretml/interpret-community/blob/master/notebooks/explain-binary-classification-local.ipynb)
1. [Training time: multiclass classification problem](https://github.com/interpretml/interpret-community/blob/master/notebooks/explain-multiclass-classification-local.ipynb)
1. Explain models with engineered features:
1. [Simple feature transformations](https://github.com/interpretml/interpret-community/blob/master/notebooks/simple-feature-transformations-explain-local.ipynb)
1. [Advanced feature transformations](https://github.com/interpretml/interpret-community/blob/master/notebooks/advanced-feature-transformations-explain-local.ipynb)
1. [Save model explanations via Azure Machine Learning Run History](../run-history/save-retrieve-explanations-run-history.ipynb)
1. [Run explainers remotely on Azure Machine Learning Compute (AMLCompute)](../remote-explanation/explain-model-on-amlcompute.ipynb)
1. [Inferencing time: deploy a locally-trained model and explainer](./train-explain-model-locally-and-deploy.ipynb)
1. [Inferencing time: deploy a locally-trained keras model and explainer](./train-explain-model-keras-locally-and-deploy.ipynb)
|
github_jupyter
|
# Reconstructing MNIST images using Autoencoder
Now that we have understood how autoencoders reconstruct the inputs, in this section we will learn how autoencoders reconstruct the images of handwritten digits using the MNIST dataset.
In this chapter, we use the Keras API from TensorFlow to build the models, so that we become familiar with how to use high-level APIs.
## Import Libraries
First, let us import the necessary libraries:
```
import warnings
warnings.filterwarnings('ignore')
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
#plotting
import matplotlib.pyplot as plt
%matplotlib inline
#dataset
from tensorflow.keras.datasets import mnist
import numpy as np
```
## Prepare the Dataset
Let us load the MNIST dataset. Since autoencoders reconstruct the given input, we don't need the labels. So, we just load x_train for training and x_test for testing:
```
(x_train, _), (x_test, _) = mnist.load_data()
```
Normalize the data by dividing by the maximum pixel value, which is 255:
```
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
```
Shape of our dataset:
```
print(x_train.shape, x_test.shape)
```
Reshape the images into a 2D array (one flattened 784-pixel vector per image):
```
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
```
Now, the shape of data would become:
```
print(x_train.shape, x_test.shape)
```
# Define the Encoder
Now, we define the encoder which takes the images as an input and returns the encodings.
Define the size of the encodings:
```
encoding_dim = 32
```
Define the input layer, which acts as a placeholder for the input images:
```
input_image = Input(shape=(784,))
```
Define the encoder which takes the input_image and returns the encodings:
```
encoder = Dense(encoding_dim, activation='relu')(input_image)
```
# Define the Decoder
Let us define the decoder which takes the encoded values from the encoder and returns the reconstructed image:
```
decoder = Dense(784, activation='sigmoid')(encoder)
```
# Build the model
Now that we have defined the encoder and decoder, we define the model, which takes images as input and returns the output of the decoder, that is, the reconstructed image:
```
model = Model(inputs=input_image, outputs=decoder)
```
Let us look at summary of the model:
```
model.summary()
```
Compile the model with binary cross-entropy loss and minimize it using the AdaDelta optimizer:
```
model.compile(optimizer='adadelta', loss='binary_crossentropy')
```
Now, let us train the model.
Generally, we feed the data to the model as model.fit(x, y), where x is the input and y is the label. But since autoencoders reconstruct their inputs, the input and the output to the model should be the same. So we feed the data to the model as model.fit(x_train, x_train):
```
model.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True, validation_data=(x_test, x_test))
```
## Reconstruct images
Let us see how our model is performing in the test dataset. Feed the test images to the model and get the reconstructed images:
```
reconstructed_images = model.predict(x_test)
```
## Plotting reconstructed images
First, let us plot the actual images, i.e., the input images:
```
n = 7
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(1, n, i+1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
Plot the reconstructed image:
```
n = 7
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(reconstructed_images[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
As you can see, the autoencoder has learned to reconstruct the given input images. In the next section, we will learn about the convolutional autoencoder, which uses convolutional layers in the encoder and decoder networks.
|
github_jupyter
|
```
# default_exp core
```
# Few-shot Learning with GPT-J
> API details.
```
# export
import os
import pandas as pd
#hide
from nbdev.showdoc import *
import toml
s = toml.load("../.streamlit/secrets.toml", _dict=dict)
```
Using the `GPT-J` model API from [Nlpcloud](https://nlpcloud.io/home/token)
```
import nlpcloud
client = nlpcloud.Client("gpt-j", s['nlpcloud_token'], gpu=True)
```

## Aoe2 Civ Builder
https://ageofempires.fandom.com/wiki/Civilizations_(Age_of_Empires_II)
```
# example API call
generation = client.generation("""Civilisation: Britons
Specialty: Foot archers
Unique unit: Longbowman
Unique technologies: Yeomen (+1 range for foot archers and +2 attack for towers)
Unique technologies: Warwolf (Trebuchets do blast damage)
Wonder: Chichester Cathedral
Civilization bonuses: Shepherds work 25% faster.
Team bonus: Town Centers cost -50% wood (starting in the Castle Age).
###
Civilisation: Mongols
Specialty: Cavalry archers
Unique unit: Mangudai
Unique technologies: Nomads (Houses retain population when destroyed)
Unique technologies: Drill (Siege Workshop units move 50% faster)
Wonder: Great Tent of Genghis Khan
Civilization bonuses: Hunters work 40% faster.
Team bonus: The Scout Cavalry line has +2 Line of Sight.
###
Civilisation: Celts
Specialty: Infantry and siege weapons
Unique unit: Woad Raider
Unique technologies: Stronghold (Castles and towers fire 25% faster)
Unique technologies: Furor Celtica (Siege Workshop units have +40% HP)
Wonder: Rock of Cashel
Civilization bonuses: Infantry units move 15% faster (starting in the Feudal Age).
Civilization bonuses: Lumberjacks work 15% faster.
Civilization bonuses: Siege weapons fire 25% faster.
Civilization bonuses: Enemy herdables can be converted regardless of enemy units next to them.
Team bonus: Siege Workshops work 20% faster.
###
Civilisation: New Zealand Maori""",
max_length=250,
length_no_input=True,
end_sequence="###",
remove_input=True)
print('Civilisation: New Zealand Maori\n ', generation["generated_text"])
def create_input_string(civname):
return f"""Civilisation: Britons
Specialty: Foot archers
Unique unit: Longbowman
Unique technologies: Yeomen (+1 range for foot archers and +2 attack for towers)
Unique technologies: Warwolf (Trebuchets do blast damage)
Wonder: Chichester Cathedral
Civilization bonuses: Shepherds work 25% faster.
Team bonus: Town Centers cost -50% wood (starting in the Castle Age).
###
Civilisation: Mongols
Specialty: Cavalry archers
Unique unit: Mangudai
Unique technologies: Nomads (Houses retain population when destroyed)
Unique technologies: Drill (Siege Workshop units move 50% faster)
Wonder: Great Tent of Genghis Khan
Civilization bonuses: Hunters work 40% faster.
Team bonus: The Scout Cavalry line has +2 Line of Sight.
###
Civilisation: Celts
Specialty: Infantry and siege weapons
Unique unit: Woad Raider
Unique technologies: Stronghold (Castles and towers fire 25% faster)
Unique technologies: Furor Celtica (Siege Workshop units have +40% HP)
Wonder: Rock of Cashel
Civilization bonuses: Infantry units move 15% faster (starting in the Feudal Age).
Civilization bonuses: Lumberjacks work 15% faster.
Civilization bonuses: Siege weapons fire 25% faster.
Civilization bonuses: Enemy herdables can be converted regardless of enemy units next to them.
Team bonus: Siege Workshops work 20% faster.
###
Civilisation: {civname}"""
def generate_civ(civname, client):
"""
Creates input string and sends to nlpcloud for few-shot learning
"""
print(f'🌐 Generating New Civ: {civname} \n')
input_str = create_input_string(civname)
generation = client.generation(input_str,
max_length=250,
length_no_input=True,
end_sequence='###',
remove_input=True)
civgen = generation["generated_text"].strip('\n')
print(f"🛡️ **{civname}**\n{civgen}")
return civgen
c = generate_civ(civname='New Zealand Maori', client=client)
c = generate_civ(civname='Fijians', client=client)
```

```
c = generate_civ(civname='Canadians', client=client)
c = generate_civ(civname='European Union', client=client)
c = generate_civ(civname='Dutch', client=client)
c = generate_civ(civname='Star Wars Death Star', client=client)
```
|
github_jupyter
|
# Synthetic Images from simulated data
## Authors
Yi-Hao Chen, Sebastian Heinz, Kelle Cruz, Stephanie T. Douglas
## Learning Goals
- Assign WCS astrometry to an image using ```astropy.wcs```
- Construct a PSF using ```astropy.modeling.model```
- Convolve raw data with PSF using ```astropy.convolution```
- Calculate polarization fraction and angle from Stokes I, Q, U data
- Overplot quivers on the image
## Keywords
modeling, convolution, coordinates, WCS, FITS, radio astronomy, matplotlib, colorbar
## Summary
In this tutorial, we will:
[1. Load and examine the FITS file](#1.-Load-and-examine-the-FITS-file)
[2. Set up astrometry coordinates](#2.-Set-up-astrometry-coordinates)
[3. Prepare a Point Spread Function (PSF)](#3.-Prepare-a-Point-Spread-Function-(PSF))
>[3.a How to do this without astropy kernels](#3.a-How-to-do-this-without-astropy-kernels)
[4. Convolve image with PSF](#4.-Convolve-image-with-PSF)
[5. Convolve Stokes Q and U images](#5.-Convolve-Stokes-Q-and-U-images)
[6. Calculate polarization angle and fraction for quiver plot](#6.-Calculate-polarization-angle-and-fraction-for-quiver-plot)
```
from astropy.utils.data import download_file
from astropy.io import fits
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.wcs import WCS
from astropy.convolution import Gaussian2DKernel
from astropy.modeling.models import Lorentz1D
from astropy.convolution import convolve_fft
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
## 1. Load and examine the FITS file
Here we begin with 2-dimensional data that were stored in FITS format from some simulations. We have Stokes I, Q, and U maps. We'll first load a FITS file and examine the header.
```
file_i = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_i_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_i)
hdulist.info()
hdu = hdulist['NN_EMISSIVITY_I_LOBE_150.0MHZ']
hdu.header
```
We can see this FITS file, which was created in [yt](https://yt-project.org/), has x and y coordinates in physical units (cm). We want to convert them into sky coordinates. Before we proceed, let's find out the range of the data and plot a histogram.
```
print(hdu.data.max())
print(hdu.data.min())
np.seterr(divide='ignore') #suppress the warnings raised by taking log10 of data with zeros
plt.hist(np.log10(hdu.data.flatten()), range=(-3, 2), bins=100);
```
Once we know the range of the data, we can do a visualization with the proper range (```vmin``` and ```vmax```).
```
fig = plt.figure(figsize=(6,12))
fig.add_subplot(111)
# We plot it in log-scale and add a small number to avoid nan values.
plt.imshow(np.log10(hdu.data+1E-3), vmin=-1, vmax=1, origin='lower')
```
## 2. Set up astrometry coordinates
From the header, we know that the x and y axes are in centimeters. However, in an observation we usually have RA and Dec. To convert physical units to sky coordinates, we need to make some assumptions about where the object is located, i.e. the distance to the object and the central RA and Dec.
```
# distance to the object
dist_obj = 200*u.Mpc
# We have the RA in hh:mm:ss and DEC in dd:mm:ss format.
# We will use Skycoord to convert them into degrees later.
ra_obj = '19h59m28.3566s'
dec_obj = '+40d44m02.096s'
```
Here we convert the pixel scale from cm to degrees by dividing by the distance to the object.
```
cdelt1 = ((hdu.header['CDELT1']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
cdelt2 = ((hdu.header['CDELT2']*u.cm/dist_obj.to('cm'))*u.rad).to('deg')
print(cdelt1, cdelt2)
```
Use ```astropy.wcs.WCS``` to prepare a FITS header.
```
w = WCS(naxis=2)
# reference pixel coordinate
w.wcs.crpix = [hdu.data.shape[0]/2,hdu.data.shape[1]/2]
# sizes of the pixel in degrees
w.wcs.cdelt = [-cdelt1.base, cdelt2.base]
# converting ra and dec into degrees
c = SkyCoord(ra_obj, dec_obj)
w.wcs.crval = [c.ra.deg, c.dec.deg]
# the units of the axes are in degrees
w.wcs.cunit = ['deg', 'deg']
```
Now we can convert the WCS coordinate into header and update the hdu.
```
wcs_header = w.to_header()
hdu.header.update(wcs_header)
```
Let's take a look at the header. ```CDELT1```, ```CDELT2```, ```CUNIT1```, ```CUNIT2```, ```CRVAL1```, and ```CRVAL2``` are in sky coordinates now.
```
hdu.header
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(6,12))
fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(hdu.data+1e-3), vmin=-1, vmax=1, origin='lower')
plt.xlabel('RA')
plt.ylabel('Dec')
```
Now we have the sky coordinate for the image!
## 3. Prepare a Point Spread Function (PSF)
Simple PSFs are included in ```astropy.convolution.kernel```. We'll use ```astropy.convolution.Gaussian2DKernel``` here.
First we need to set the telescope resolution. For a 2D Gaussian, we can calculate sigma in pixels by using our pixel scale keyword ```cdelt2``` from above.
```
# assume our telescope has 1 arcsecond resolution
telescope_resolution = 1*u.arcsecond
# calculate the sigma in pixels.
# since cdelt is in degrees, we use _.to('deg')
sigma = telescope_resolution.to('deg')/cdelt2
# By default, the Gaussian kernel will go to 4 sigma
# in each direction
psf = Gaussian2DKernel(sigma)
# let's take a look:
plt.imshow(psf.array.value)
```
## 3.a How to do this without astropy kernels
Maybe your PSF is more complicated. Here's an alternative way to do this, using a 2D Lorentzian
```
# set FWHM and psf grid
telescope_resolution = 1*u.arcsecond
gamma = telescope_resolution.to('deg')/cdelt2
x_grid = np.outer(np.linspace(-gamma*4,gamma*4,int(8*gamma)),np.ones(int(8*gamma)))
r_grid = np.sqrt(x_grid**2 + np.transpose(x_grid**2))
lorentzian = Lorentz1D(fwhm=2*gamma)
# extrude a 2D azimuthally symmetric PSF
lorentzian_psf = lorentzian(r_grid)
# normalization
lorentzian_psf /= np.sum(lorentzian_psf)
# let's take a look again:
plt.imshow(lorentzian_psf.value, interpolation='none')
```
## 4. Convolve image with PSF
Here we use ```astropy.convolution.convolve_fft``` to convolve the image. This routine uses the Fourier transform for faster calculation, which is especially fast here since our data is $2^n$-sized. Using an FFT, however, causes boundary effects, so we need to specify how we want to handle the boundary. Here we choose to "wrap" the data, which means making the data periodic.
```
convolved_image = convolve_fft(hdu.data, psf, boundary='wrap')
# Put a psf at the corner of the image
delta_x_psf=100 # number of pixels from the edges
xmin, xmax = -psf.shape[1]-delta_x_psf, -delta_x_psf
ymin, ymax = delta_x_psf, delta_x_psf+psf.shape[0]
convolved_image[xmin:xmax, ymin:ymax] = psf.array/psf.array.max()*10
```
Now let's take a look at the convolved image.
```
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(8,12))
i_plot = fig.add_subplot(111, projection=wcs)
plt.imshow(np.log10(convolved_image+1e-3), vmin=-1, vmax=1.0, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
```
## 5. Convolve Stokes Q and U images
```
hdulist.info()
file_q = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_q_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_q)
hdu_q = hdulist['NN_EMISSIVITY_Q_LOBE_150.0MHZ']
file_u = download_file(
'http://data.astropy.org/tutorials/synthetic-images/synchrotron_u_lobe_0700_150MHz_sm.fits',
cache=True)
hdulist = fits.open(file_u)
hdu_u = hdulist['NN_EMISSIVITY_U_LOBE_150.0MHZ']
# Update the header with the wcs_header we created earlier
hdu_q.header.update(wcs_header)
hdu_u.header.update(wcs_header)
# Convolve the images with the the psf
convolved_image_q = convolve_fft(hdu_q.data, psf, boundary='wrap')
convolved_image_u = convolve_fft(hdu_u.data, psf, boundary='wrap')
```
Let's plot the Q and U images.
```
wcs = WCS(hdu.header)
fig = plt.figure(figsize=(16,12))
fig.add_subplot(121, projection=wcs)
plt.imshow(convolved_image_q, cmap='seismic', vmin=-0.5, vmax=0.5, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
fig.add_subplot(122, projection=wcs)
plt.imshow(convolved_image_u, cmap='seismic', vmin=-0.5, vmax=0.5, origin='lower')#, cmap=plt.cm.viridis)
plt.xlabel('RA')
plt.ylabel('Dec')
plt.colorbar()
```
## 6. Calculate polarization angle and fraction for quiver plot
Note that rotating the Stokes Q and U maps requires changing the signs of both. Here we assume that the Stokes Q and U maps were calculated defining the y/declination axis as vertical, such that Q is positive for polarization vectors along the x/right-ascension axis.
```
# First, we plot the background image
fig = plt.figure(figsize=(8,16))
i_plot = fig.add_subplot(111, projection=wcs)
i_plot.imshow(np.log10(convolved_image+1e-3), vmin=-1, vmax=1, origin='lower')
# ranges of the axis
xx0, xx1 = i_plot.get_xlim()
yy0, yy1 = i_plot.get_ylim()
# binning factor
factor = [64, 66]
# re-binned number of points in each axis
nx_new = convolved_image.shape[1] // factor[0]
ny_new = convolved_image.shape[0] // factor[1]
# These are the positions of the quivers
X,Y = np.meshgrid(np.linspace(xx0,xx1,nx_new,endpoint=True),
np.linspace(yy0,yy1,ny_new,endpoint=True))
# bin the data
I_bin = convolved_image.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
Q_bin = convolved_image_q.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
U_bin = convolved_image_u.reshape(nx_new, factor[0], ny_new, factor[1]).sum(3).sum(1)
# polarization angle
psi = 0.5*np.arctan2(U_bin, Q_bin)
# polarization fraction
frac = np.sqrt(Q_bin**2+U_bin**2)/I_bin
# mask for low signal area
mask = I_bin < 0.1
frac[mask] = 0
psi[mask] = 0
pixX = frac*np.cos(psi) # X-vector
pixY = frac*np.sin(psi) # Y-vector
# keyword arguments for quiverplots
quiveropts = dict(headlength=0, headwidth=1, pivot='middle')
i_plot.quiver(X, Y, pixX, pixY, scale=8, **quiveropts)
```
## Exercise
### Convert the units of the data from Jy/arcsec^2 to Jy/beam
The intensity of the data is given in units of Jy/arcsec^2. Observational data usually have the intensity in units of Jy/beam. Assuming a beam size, or taking the PSF we created earlier, you can convert the data into Jy/beam.
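A sketch of one possible solution is shown below. It assumes the convolved image is in Jy/arcsec^2 and treats the 1-arcsecond telescope resolution defined earlier as the FWHM of a Gaussian beam, whose solid angle is $\pi\,\mathrm{FWHM}^2/(4\ln 2)$.
```
# Solid angle of a Gaussian beam with the given FWHM: pi * FWHM^2 / (4 ln 2)
beam_fwhm = telescope_resolution  # 1 arcsecond, defined above
beam_area = np.pi * beam_fwhm**2 / (4 * np.log(2))

# The data are in Jy/arcsec^2, so multiplying by the beam area
# expressed in arcsec^2 gives Jy/beam.
convolved_image_jy_per_beam = convolved_image * beam_area.to_value(u.arcsec**2)
```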
|
github_jupyter
|
# Candlestick Upside Gap Two Crows
https://www.investopedia.com/terms/u/upside-gap-two-crows.asp
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import talib
import warnings
warnings.filterwarnings("ignore")
# yahoo finance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'ICLR'
start = '2012-01-01'
end = '2021-10-22'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
```
## Candlestick with Upside Gap Two Crows
```
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
from mplfinance.original_flavor import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax.grid(True, which='both')
ax.minorticks_on()
axv = ax.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
two_crows = talib.CDLUPSIDEGAP2CROWS(df['Open'], df['High'], df['Low'], df['Close'])
two_crows = two_crows[two_crows != 0]
df['two_crows'] = talib.CDLUPSIDEGAP2CROWS(df['Open'], df['High'], df['Low'], df['Close'])
df.loc[df['two_crows'] !=0]
df['Adj Close'].loc[df['two_crows'] !=0]
df['two_crows'].loc[df['two_crows'] !=0].index
two_crows
two_crows.index
df
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax.grid(True, which='both')
ax.minorticks_on()
axv = ax.twinx()
ax.plot_date(df['Adj Close'].loc[df['two_crows'] !=0].index, df['Adj Close'].loc[df['two_crows'] !=0],
'Dc', # marker style 'D' (diamond), color 'c' (cyan)
fillstyle='none', # marker is not filled (with color)
ms=10.0)
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
```
## Plot Certain dates
```
df = df['2019-04-20':'2019-05-05']
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
ax.set_facecolor('white')
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='black', colordown='red', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
#ax.grid(True, which='both')
#ax.minorticks_on()
axv = ax.twinx()
ax.plot_date(df['Adj Close'].loc[df['two_crows'] !=0].index, df['Adj Close'].loc[df['two_crows'] !=0],
'*y', # marker style '*' (star), color 'y' (yellow)
fillstyle='none', # marker is not filled (with color)
ms=40.0)
colors = dfc.VolumePositive.map({True: 'black', False: 'red'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
```
# Highlight Candlestick
```
from matplotlib.dates import date2num
from datetime import datetime
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
#ax.grid(True, which='both')
#ax.minorticks_on()
axv = ax.twinx()
ax.axvspan(date2num(datetime(2019,4,28)), date2num(datetime(2019,4,30)),
label="Upside Gap Two Crows Bearish",color="red", alpha=0.3)
ax.legend()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
```
|
github_jupyter
|
# Slope Analysis
This project uses the change in holding-current slope to identify drug responders.
## Analysis Steps
The `getBaselineAndMaxDrugSlope` function smooths the raw data with a moving-window average whose size is set by `filterSize`, then analyzes the smoothed holding current in an ABF and returns the baseline slope and the drug slope.
The _slope of baseline_ is calculated as the linear regression slope over the 3-minute period before drug onset.
In addition, the smoothed data are separated into segments of n = `regressionSize` data points each, and the linear regression slope is calculated for each segment.
The _peak slope of drug_ is the most negative segment slope during the chosen drug period (1-5 minutes after drug onset, in this case).
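The sketch below illustrates this logic with plain `numpy`/`scipy`; it is not the actual `slopeTools` implementation, and the argument names, the per-sweep time/current arrays, and the drug-onset time are assumptions made for illustration.
```
import numpy as np
from scipy.stats import linregress

def sketch_baseline_and_max_drug_slope(times_min, currents_pA, drug_onset_min,
                                        filter_size=10, regression_size=17):
    """Illustrative sketch of the analysis described above (not slopeTools itself).

    times_min   -- numpy array of sweep times in minutes
    currents_pA -- numpy array of mean holding current per sweep (pA)
    """
    # Smooth the holding current with a moving-window average of filter_size sweeps
    kernel = np.ones(filter_size) / filter_size
    smoothed = np.convolve(currents_pA, kernel, mode="same")

    # Baseline slope: linear regression over the 3 minutes before drug onset
    baseline_mask = (times_min >= drug_onset_min - 3) & (times_min < drug_onset_min)
    baseline_slope = linregress(times_min[baseline_mask], smoothed[baseline_mask]).slope

    # Split the drug period (1-5 minutes after onset) into segments of
    # regression_size points and take the most negative segment slope
    drug_mask = (times_min >= drug_onset_min + 1) & (times_min <= drug_onset_min + 5)
    t_drug, y_drug = times_min[drug_mask], smoothed[drug_mask]
    segment_slopes = [
        linregress(t_drug[i:i + regression_size], y_drug[i:i + regression_size]).slope
        for i in range(0, len(t_drug) - regression_size + 1, regression_size)
    ]
    drug_slope = min(segment_slopes)
    return baseline_slope, drug_slope
```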
## Set-Up the Environment
```
%load_ext autoreload
import sys
sys.path.append("../src")
from os.path import basename
import slopeTools
import plotTools
import statsTools
import matplotlib.pyplot as plt
```
## Define ABF Files and Filter Settings
The user can list the ABF files they want to analyze
```
#opto:
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124006.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124013.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124020.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124026.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21124033.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126007.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126016.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126030.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126050.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21126056.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21218033.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21219006.abf"
]
```
```
#opto+l368:
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21218077.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21219013.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21219039.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21219069.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21323006.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21323036.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21323047.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21325007.abf",
"X:/Data/C57/TGOT on PVT/2020-10-12 OT-ChR2/21325019.abf"
]
#10nM TGOT
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804007.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804030.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804043.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804048.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804060.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20804066.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805008.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805029.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805035.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20811011.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20811021.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817012.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20831011.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20831017.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/2021_05_14_DIC1_0008.abf"
]
#10nM TGOT+L368
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805041.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805047.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20805053.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20806018.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20806036.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20811034.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20811041.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817020.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817026.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817032.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20817039.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20901022.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20901035.abf",
"X:/Data/C57/TGOT on PVT/2020-07-28 10nM TGOT on PVT/20902011.abf",
]
#50nM TGOT
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20723038.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20723029.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724011.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724017.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724023.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724027.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724033.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/20724045.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/2021_05_13_DIC1_0005.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/2021_05_13_DIC1_0021.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/2021_05_13_DIC1_0025.abf",
"X:/Data/C57/TGOT on PVT/2020-07-23 50nM TGOT on PVT/2021_05_13_DIC3_0050.abf"
]
#50nM TGOT+L368
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20727010.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20727026.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20727032.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20727039.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20728005.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20728011.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/20728026.abf",
"X:/Data/C57/TGOT on PVT/2020-07-27 50nM TGOT w L368/2021_05_13_DIC3_0043.abf"
]
#50nM TGOT
abfFilePaths = [
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20n19022.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20n19029.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20n19036.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20n19052.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d03006.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d03032.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d03055.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04012.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04023.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04030.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04038.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04045.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d04052.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d16012.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d16020.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d16035.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d17022.abf",
"X:/Data/C57/TGOT on PVT/2020-11-18 TGOT on PVT-NAc neurons/20d17028.abf"
]
```
The users can decide the parameters they want for the data analysis.
`filterSize` sets the number of points (sweeps) used for the moving-window average.
`regressionSize` sets the number of smoothed data points used to calculate each linear regression slope during the drug range.
```
filterSize = 10
regressionSize = 17
```
## Analyze All ABFs
```
baselineSlopes = []
drugSlopes = []
abfIDs = []
for abfFilePath in abfFilePaths:
baselineSlope, drugSlope = slopeTools.getBaselineAndMaxDrugSlope(abfFilePath, filterSize, regressionSize)
baselineSlopes.append(baselineSlope)
drugSlopes.append(drugSlope)
abfIDs.append(basename(abfFilePath))
```
## Compare Baseline vs. Drug Slopes
The users can plot the baseline slope and the peak drug slope of each cell, and report the p-value in the title by performing a paired t-test between baseline slopes and peak drug slopes.
```
plotTools.plotPairs(baselineSlopes, drugSlopes, "slopes")
```
## Assess Responsiveness of All Cells
Generate a scatter plot showing the slope difference of each cell.
This plot can assist users in deciding the desired threshold (red dotted line) to separate responders from non-responders.
```
slopeThreshold = -1.5
drugEffects = []
for i in range(len(abfIDs)):
drugEffects.append(drugSlopes[i] - baselineSlopes[i])
plt.figure (figsize=(6, 4))
plt.ylabel("Slope Difference (pA/min)")
plt.plot(abfIDs, drugEffects, 'o', color = "b")
plt.gca().set_xticklabels(abfIDs, rotation=45, ha='right')
plt.axhline(slopeThreshold, color='r', ls='--')
plt.show()
```
## Define Cells as Responsive vs. Non-Responsive
The users can define the <b>slopeThreshold</b>. The difference between the baseline slope and the peak drug slope must be more negative than this value for a cell to count as a responder.
```
drugEffects=statsTools.responderLessThanThreshold(abfIDs, drugEffects, slopeThreshold)
```
|
github_jupyter
|
# TensorFlow Regression Example
## Creating Data
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# 1 Million Points
x_data = np.linspace(0.0,10.0,1000000)
noise = np.random.randn(len(x_data))
# y = mx + b + noise_levels
b = 5
y_true = (0.5 * x_data ) + 5 + noise
my_data = pd.concat([pd.DataFrame(data=x_data,columns=['X Data']),pd.DataFrame(data=y_true,columns=['Y'])],axis=1)
my_data.head()
my_data.sample(n=250).plot(kind='scatter',x='X Data',y='Y')
```
# TensorFlow
## Batch Size
We will take the data in batches (1,000,000 points is a lot to pass in at once)
```
import tensorflow as tf
# Number of random points to grab in each batch
batch_size = 8
```
**Variables**
```
w_tf = tf.Variable(np.random.uniform())
b_tf = tf.Variable(np.random.uniform(1,10))
```
**Placeholders**
```
x_train = tf.placeholder(tf.float32,shape=(batch_size))
y_train = tf.placeholder(tf.float32,shape=(batch_size))
```
**Graph**
```
y_hat = w_tf * x_train + b_tf
```
**Loss Function**
```
error = tf.reduce_sum((y_train - y_hat)**2)
```
**Optimizer**
```
optimizer = tf.train.GradientDescentOptimizer(0.001)
train = optimizer.minimize(error)
```
**Initialize Variables**
```
init = tf.global_variables_initializer()
```
### Session
```
with tf.Session() as sess:
sess.run(init)
batchs = 1000
for i in range(batchs):
batch_index = np.random.randint(len(x_data),size=(batch_size))
feed = {x_train:x_data[batch_index], y_train:y_true[batch_index]}
sess.run(train,feed_dict = feed)
final_w, final_b = sess.run([w_tf,b_tf])
final_w
final_b
```
### Results
```
my_data.sample(n=250).plot(kind='scatter',x='X Data',y='Y')
plt.plot(x_data, final_w*x_data+final_b,'r')
```
## tf.keras API
```
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
```
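For comparison, the same regression can be fit with the layers imported above. A minimal sketch, assuming the `x_data` and `y_true` arrays created at the start of the notebook:
```
# Minimal tf.keras linear regression on the data generated above
inputs = Input(shape=(1,))
outputs = Dense(1)(inputs)  # a single Dense unit learns the slope and intercept
keras_model = Model(inputs=inputs, outputs=outputs)
keras_model.compile(optimizer='sgd', loss='mse')
keras_model.fit(x_data.reshape(-1, 1), y_true, batch_size=32, epochs=1, verbose=0)

kernel, bias = keras_model.get_weights()
print("w ~", kernel[0][0], " b ~", bias[0])
```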
## tf.estimator API
Much simpler API for basic tasks like regression! We'll talk about more abstractions like TF-Slim later on.
```
feat_cols = [tf.feature_column.numeric_column('x',shape=[1])]
estimator = tf.estimator.LinearRegressor(feature_columns=feat_cols)
```
### Train Test Split
We haven't actually performed a train test split yet! So let's do that on our data now and perform a more realistic version of a regression task.
```
from sklearn.model_selection import train_test_split
x_train, x_eval, y_train, y_eval = train_test_split(x_data,y_true,test_size=0.3, random_state = 101)
print(x_train.shape)
print(y_train.shape)
print(x_eval.shape)
print(y_eval.shape)
```
### Set up Estimator Inputs
```
# Can also do .pandas_input_fn
input_func = tf.estimator.inputs.numpy_input_fn({'x':x_train},y_train,batch_size=4,num_epochs=None,shuffle=True)
train_input_func = tf.estimator.inputs.numpy_input_fn({'x':x_train},y_train,batch_size=4,num_epochs=1000,shuffle=False)
eval_input_func = tf.estimator.inputs.numpy_input_fn({'x':x_eval},y_eval,batch_size=4,num_epochs=1000,shuffle=False)
```
### Train the Estimator
```
estimator.train(input_fn=input_func,steps=1000)
```
### Evaluation
```
train_metrics = estimator.evaluate(input_fn=train_input_func,steps=1000)
eval_metrics = estimator.evaluate(input_fn=eval_input_func,steps=1000)
print("train metrics: {}".format(train_metrics))
print("eval metrics: {}".format(eval_metrics))
```
### Predictions
```
input_fn_predict = tf.estimator.inputs.numpy_input_fn({'x':np.linspace(0,10,10)},shuffle=False)
list(estimator.predict(input_fn=input_fn_predict))
predictions = []# np.array([])
for x in estimator.predict(input_fn=input_fn_predict):
predictions.append(x['predictions'])
predictions
my_data.sample(n=250).plot(kind='scatter',x='X Data',y='Y')
plt.plot(np.linspace(0,10,10),predictions,'r')
```
# Great Job!
|
github_jupyter
|
# Image classification training with image format
1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)
1. [Permissions and environment variables](#Permissions-and-environment-variables)
2. [Prepare the data](#Prepare-the-data)
3. [Fine-tuning The Image Classification Model](#Fine-tuning-the-Image-classification-model)
1. [Training parameters](#Training-parameters)
2. [Training](#Training)
4. [Deploy The Model](#Deploy-the-model)
1. [Create model](#Create-model)
2. [Batch transform](#Batch-transform)
3. [Realtime inference](#Realtime-inference)
1. [Create endpoint configuration](#Create-endpoint-configuration)
2. [Create endpoint](#Create-endpoint)
3. [Perform inference](#Perform-inference)
4. [Clean up](#Clean-up)
## Introduction
Welcome to our end-to-end example of the image classification algorithm training with image format. In this demo, we will use the Amazon SageMaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on ImageNet data) to learn to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [Caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/).
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
## Prerequisites and Preprocessing
### Permissions and environment variables
Here we set up the linkage and authentication to AWS services. There are three parts to this:
* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook
* The S3 bucket that you want to use for training and model data
* The Amazon SageMaker image classification docker image which need not be changed
```
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker import image_uris
role = get_execution_role()
bucket = sagemaker.session.Session().default_bucket()
training_image = image_uris.retrieve(
region=boto3.Session().region_name, framework="image-classification"
)
```
## Fine-tuning the Image classification model
### Prepare the data
The Caltech-256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 images and a maximum of about 800 images per category.
The image classification algorithm can take two types of input formats. The first is a [RecordIO format](https://mxnet.incubator.apache.org/tutorials/basic/record_io.html) (content type: application/x-recordio) and the other is a [lst format](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec) (content type: application/x-image). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the lst format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/).
```
import os
import urllib.request
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
# Caltech-256 image files
s3 = boto3.client("s3")
s3.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories.tar",
"256_ObjectCategories.tar",
)
!tar -xf 256_ObjectCategories.tar
# Tool for creating lst file
download("https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/im2rec.py")
%%bash
mkdir -p caltech_256_train_60
for i in 256_ObjectCategories/*; do
c=`basename $i`
mkdir -p caltech_256_train_60/$c
for j in `ls $i/*.jpg | shuf | head -n 60`; do
mv $j caltech_256_train_60/$c/
done
done
python im2rec.py --list --recursive caltech-256-60-train caltech_256_train_60/
python im2rec.py --list --recursive caltech-256-60-val 256_ObjectCategories/
```
A .lst file is a tab-separated file with three columns that contains a list of image files. The first column specifies the image index, the second column specifies the class label index for the image, and the third column specifies the relative path of the image file. The image index in the first column should be unique across all of the images. Here we make an image list file using the [im2rec](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) tool from MXNet. You can also create the .lst file in your own way. An example of .lst file is shown as follows.
```
!head -n 3 ./caltech-256-60-train.lst > example.lst
f = open("example.lst", "r")
lst_content = f.read()
print(lst_content)
```
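As noted above, you can also build the .lst file yourself instead of using im2rec. A minimal sketch that walks a folder of class subdirectories is shown below; the folder layout and output file name are assumptions for illustration only.
```
import os

def write_lst(root_dir, lst_path):
    """Write a tab-separated .lst file: index <TAB> class label index <TAB> relative image path."""
    class_names = sorted(
        d for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d))
    )
    with open(lst_path, "w") as f:
        index = 0
        for label, class_name in enumerate(class_names):
            class_dir = os.path.join(root_dir, class_name)
            for file_name in sorted(os.listdir(class_dir)):
                if file_name.lower().endswith(".jpg"):
                    f.write("{}\t{}\t{}/{}\n".format(index, label, class_name, file_name))
                    index += 1

# Example (hypothetical): write_lst("caltech_256_train_60", "caltech-256-60-train-custom.lst")
```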
When you are bringing your own image files to train, please ensure that the .lst file follows the same format as described above. In order to train with the lst format interface, passing the lst file for both training and validation in the appropriate format is mandatory. Once we have the data available in the correct format for training, the next step is to upload the image and .lst file to S3 bucket.
```
# Four channels: train, validation, train_lst, and validation_lst
s3train = "s3://{}/image-classification/train/".format(bucket)
s3validation = "s3://{}/image-classification/validation/".format(bucket)
s3train_lst = "s3://{}/image-classification/train_lst/".format(bucket)
s3validation_lst = "s3://{}/image-classification/validation_lst/".format(bucket)
# upload the image files to train and validation channels
!aws s3 cp caltech_256_train_60 $s3train --recursive --quiet
!aws s3 cp 256_ObjectCategories $s3validation --recursive --quiet
# upload the lst files to train_lst and validation_lst channels
!aws s3 cp caltech-256-60-train.lst $s3train_lst --quiet
!aws s3 cp caltech-256-60-val.lst $s3validation_lst --quiet
```
Now we have all the data stored in the S3 bucket. The image and lst files will be converted to RecordIO files internally by the image classification algorithm. But if you want to do the conversion yourself, the following cell shows how to do it using the [im2rec](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) tool. Note that this is just an example of creating RecordIO files. We are **_not_** using them for training in this notebook. More details on creating RecordIO files can be found in this [tutorial](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec).
```
%%bash
python im2rec.py --resize 256 --quality 90 --num-thread 16 caltech-256-60-val 256_ObjectCategories/
python im2rec.py --resize 256 --quality 90 --num-thread 16 caltech-256-60-train caltech_256_train_60/
```
After you created the RecordIO files, you can upload them to the train and validation channels for training. To train with RecordIO format, you can follow "[Image-classification-fulltraining.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-fulltraining.ipynb)" and "[Image-classification-transfer-learning.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-transfer-learning.ipynb)". Again, we will **_not_** use the RecordIO file for the training. The following sections will only show you how to train a model with images and list files.
Before training the model, we need to set up the training parameters. The next section will explain the parameters in detail.
## Fine-tuning the Image Classification Model
### Training parameters
There are two kinds of parameters that need to be set for training. The first one are the parameters for the training job. These include:
* **Input specification**: These are the training and validation channels that specify the path where training data is present. They are specified in the "InputDataConfig" section. The main parameters that need to be set are the "ContentType", which can be "application/x-recordio" or "application/x-image" depending on the input data format, and the "S3Uri", which specifies the bucket and the folder where the data is present.
* **Output specification**: This is specified in the "OutputDataConfig" section. We just need to specify the path where the output can be stored after training
* **Resource config**: This section specifies the type of instance on which to run the training and the number of hosts used for training. If "InstanceCount" is more than 1, then training can be run in a distributed manner.
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:
* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample but other values such as 50, 152 can be used.
* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.
* **num_training_samples**: This is the total number of training samples. It is set to 15240 for the Caltech dataset with the current split.
* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For Caltech, we use 257 because it has 256 object categories + 1 clutter class.
* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.
* **epochs**: Number of training epochs.
* **learning_rate**: Learning rate for training.
* **top_k**: Report the top-k accuracy during training.
* **resize**: Resize the image before using it for training. The images are resized so that the shortest side has this length. If the parameter is not set, then the training data is used as-is without resizing.
* **checkpoint_frequency**: Period to store model parameters (in number of epochs).
* **use_pretrained_model**: Set to 1 to use pretrained model for transfer learning.
```
# The algorithm supports multiple network depth (number of layers). They are 18, 34, 50, 101, 152 and 200
# For this training, we will use 18 layers
num_layers = 18
# we need to specify the input image shape for the training data
image_shape = "3,224,224"
# we also need to specify the number of training samples in the training set
num_training_samples = 15240
# specify the number of output classes
num_classes = 257
# batch size for training
mini_batch_size = 128
# number of epochs
epochs = 6
# learning rate
learning_rate = 0.01
# report top_5 accuracy
top_k = 5
# resize image before training
resize = 256
# period to store model parameters (in number of epochs), in this case, we will save parameters from epoch 2, 4, and 6
checkpoint_frequency = 2
# Since we are using transfer learning, we set use_pretrained_model to 1 so that weights can be
# initialized with pre-trained weights
use_pretrained_model = 1
```
### Training
Run the training using Amazon SageMaker CreateTrainingJob API
```
%%time
import time
import boto3
from time import gmtime, strftime
s3 = boto3.client("s3")
# create unique job name
job_name_prefix = "sagemaker-imageclassification-notebook"
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
job_name = job_name_prefix + timestamp
training_params = {
# specify the training docker image
"AlgorithmSpecification": {"TrainingImage": training_image, "TrainingInputMode": "File"},
"RoleArn": role,
"OutputDataConfig": {"S3OutputPath": "s3://{}/{}/output".format(bucket, job_name_prefix)},
"ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.p2.xlarge", "VolumeSizeInGB": 50},
"TrainingJobName": job_name,
"HyperParameters": {
"image_shape": image_shape,
"num_layers": str(num_layers),
"num_training_samples": str(num_training_samples),
"num_classes": str(num_classes),
"mini_batch_size": str(mini_batch_size),
"epochs": str(epochs),
"learning_rate": str(learning_rate),
"top_k": str(top_k),
"resize": str(resize),
"checkpoint_frequency": str(checkpoint_frequency),
"use_pretrained_model": str(use_pretrained_model),
},
"StoppingCondition": {"MaxRuntimeInSeconds": 360000},
# Training data should be inside a subdirectory called "train"
# Validation data should be inside a subdirectory called "validation"
# The algorithm currently only supports fullyreplicated model (where data is copied onto each machine)
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": s3train,
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-image",
"CompressionType": "None",
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": s3validation,
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-image",
"CompressionType": "None",
},
{
"ChannelName": "train_lst",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": s3train_lst,
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-image",
"CompressionType": "None",
},
{
"ChannelName": "validation_lst",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": s3validation_lst,
"S3DataDistributionType": "FullyReplicated",
}
},
"ContentType": "application/x-image",
"CompressionType": "None",
},
],
}
print("Training job name: {}".format(job_name))
print(
"\nInput Data Location: {}".format(
training_params["InputDataConfig"][0]["DataSource"]["S3DataSource"]
)
)
# create the Amazon SageMaker training job
sagemaker = boto3.client(service_name="sagemaker")
sagemaker.create_training_job(**training_params)
# confirm that the training job has started
status = sagemaker.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print("Training job current status: {}".format(status))
try:
# wait for the job to finish and report the ending status
sagemaker.get_waiter("training_job_completed_or_stopped").wait(TrainingJobName=job_name)
training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info["TrainingJobStatus"]
print("Training job ended with status: " + status)
except:
print("Training failed to start")
# if exception is raised, that means it has failed
message = sagemaker.describe_training_job(TrainingJobName=job_name)["FailureReason"]
print("Training failed with the following error: {}".format(message))
training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info["TrainingJobStatus"]
print("Training job ended with status: " + status)
print(training_info)
```
If you see the message,
> `Training job ended with status: Completed`
then that means training successfully completed and the output model was stored in the output path specified by `training_params['OutputDataConfig']`.
You can also view information about a training job and its status using the AWS SageMaker console. Just click on the "Jobs" tab.
## Deploy The Model
A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class label given an input image.
This section involves several steps,
1. [Create model](#CreateModel) - Create model for the training output
1. [Batch Transform](#BatchTransform) - Create a transform job to perform batch inference.
1. [Host the model for realtime inference](#HostTheModel) - Create an inference endpoint and perform realtime inference.
### Create model
We now create a SageMaker Model from the training output. Using the model we can create an Endpoint Configuration.
```
%%time
import boto3
from time import gmtime, strftime
sage = boto3.Session().client(service_name="sagemaker")
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
model_name = "image-classification-model" + timestamp
print(model_name)
info = sage.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]
print(model_data)
hosting_image = image_uris.retrieve(
region=boto3.Session().region_name, framework="image-classification"
)
primary_container = {
"Image": hosting_image,
"ModelDataUrl": model_data,
}
create_model_response = sage.create_model(
ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer=primary_container
)
print(create_model_response["ModelArn"])
```
### Batch transform
We now create a SageMaker Batch Transform job using the model created above to perform batch prediction.
```
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
batch_job_name = "image-classification-model" + timestamp
batch_input = s3validation + "001.ak47/"
request = {
"TransformJobName": batch_job_name,
"ModelName": model_name,
"MaxConcurrentTransforms": 16,
"MaxPayloadInMB": 6,
"BatchStrategy": "SingleRecord",
"TransformOutput": {"S3OutputPath": "s3://{}/{}/output".format(bucket, batch_job_name)},
"TransformInput": {
"DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": batch_input}},
"ContentType": "application/x-image",
"SplitType": "None",
"CompressionType": "None",
},
"TransformResources": {"InstanceType": "ml.p2.xlarge", "InstanceCount": 1},
}
print("Transform job name: {}".format(batch_job_name))
print("\nInput Data Location: {}".format(batch_input))
sagemaker = boto3.client("sagemaker")
sagemaker.create_transform_job(**request)
print("Created Transform job with name: ", batch_job_name)
while True:
response = sagemaker.describe_transform_job(TransformJobName=batch_job_name)
status = response["TransformJobStatus"]
if status == "Completed":
print("Transform job ended with status: " + status)
break
if status == "Failed":
message = response["FailureReason"]
print("Transform failed with the following error: {}".format(message))
raise Exception("Transform job failed")
time.sleep(30)
```
After the job completes, let's check the prediction results.
```
from urllib.parse import urlparse
import json
import numpy as np
s3_client = boto3.client("s3")
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
def list_objects(s3_client, bucket, prefix):
response = s3_client.list_objects(Bucket=bucket, Prefix=prefix)
objects = [content["Key"] for content in response["Contents"]]
return objects
def get_label(s3_client, bucket, prefix):
filename = prefix.split("/")[-1]
s3_client.download_file(bucket, prefix, filename)
with open(filename) as f:
data = json.load(f)
index = np.argmax(data["prediction"])
probability = data["prediction"][index]
print("Result: label - " + object_categories[index] + ", probability - " + str(probability))
return object_categories[index], probability
inputs = list_objects(s3_client, bucket, urlparse(batch_input).path.lstrip("/"))
print("Sample inputs: " + str(inputs[:2]))
outputs = list_objects(s3_client, bucket, batch_job_name + "/output")
print("Sample output: " + str(outputs[:2]))
# Check prediction result of the first 2 images
[get_label(s3_client, bucket, prefix) for prefix in outputs[0:2]]
```
### Realtime inference
We now host the model with an endpoint and perform realtime inference.
This section involves several steps,
1. [Create endpoint configuration](#CreateEndpointConfiguration) - Create a configuration defining an endpoint.
1. [Create endpoint](#CreateEndpoint) - Use the configuration to create an inference endpoint.
1. [Perform inference](#PerformInference) - Perform inference on some input data using the endpoint.
1. [Clean up](#CleanUp) - Delete the endpoint and model
#### Create endpoint configuration
SageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. To support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way.
In addition, the endpoint configuration describes the instance type required for model deployment, and can describe the autoscaling configuration.
```
from time import gmtime, strftime
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
endpoint_config_name = job_name_prefix + "-epc-" + timestamp
endpoint_config_response = sage.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.p2.xlarge",
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print("Endpoint configuration name: {}".format(endpoint_config_name))
print("Endpoint configuration arn: {}".format(endpoint_config_response["EndpointConfigArn"]))
```
#### Create endpoint
Next, the customer creates the endpoint that serves up the model by specifying the name and the configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
```
%%time
import time
timestamp = time.strftime("-%Y-%m-%d-%H-%M-%S", time.gmtime())
endpoint_name = job_name_prefix + "-ep-" + timestamp
print("Endpoint name: {}".format(endpoint_name))
endpoint_params = {
"EndpointName": endpoint_name,
"EndpointConfigName": endpoint_config_name,
}
endpoint_response = sagemaker.create_endpoint(**endpoint_params)
print("EndpointArn = {}".format(endpoint_response["EndpointArn"]))
```
The endpoint creation request has now been submitted. It may take a few minutes for the endpoint to come into service; the next cell waits for it and reports the final status.
```
# get the status of the endpoint
response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = response["EndpointStatus"]
print("EndpointStatus = {}".format(status))
try:
sagemaker.get_waiter("endpoint_in_service").wait(EndpointName=endpoint_name)
finally:
resp = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print("Arn: " + resp["EndpointArn"])
print("Create endpoint ended with status: " + status)
if status != "InService":
message = sagemaker.describe_endpoint(EndpointName=endpoint_name)["FailureReason"]
print("Training failed with the following error: {}".format(message))
raise Exception("Endpoint creation did not succeed")
```
If you see the message,
> `Create endpoint ended with status: InService`
then congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console.
We will finally create a runtime object from which we can invoke the endpoint.
#### Perform inference
Finally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
```
import boto3
runtime = boto3.Session().client(service_name="runtime.sagemaker")
```
##### Download test image
```
file_name = "/tmp/test.jpg"
s3.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories/008.bathtub/008_0007.jpg",
file_name,
)
# test image
from IPython.display import Image
Image(file_name)
import json
import numpy as np
with open(file_name, "rb") as f:
payload = f.read()
payload = bytearray(payload)
response = runtime.invoke_endpoint(
EndpointName=endpoint_name, ContentType="application/x-image", Body=payload
)
result = response["Body"].read()
# result will be in json format and convert it to ndarray
result = json.loads(result)
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))
```
#### Clean up
When we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint and the model.
```
sage.delete_endpoint(EndpointName=endpoint_name)
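# The text above also mentions deleting the model. The call below is an addition
# (not in the original cell); it uses the standard SageMaker DeleteModel API.
sage.delete_model(ModelName=model_name)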
```
|
github_jupyter
|
## PySpark Data Engineering Practice (Sandboxing)
### Olympic Athlete Data
This notebook is for data engineering practice purposes.
In this notebook I want to explore the data while using and learning PySpark.
The data is from: https://www.kaggle.com/mysarahmadbhat/120-years-of-olympic-history
```
## Imports
from pyspark.sql import SparkSession ## Create session
from pyspark.sql.types import StructType, StructField, StringType, IntegerType ## Create schema
## Create spark sessions
spark = (SparkSession.builder.appName("AthletesAnalytics").getOrCreate())
```
### Import the data
```
## Create schema
schema = StructType([
StructField("ID", StringType(), True),
StructField("Name", StringType(), True),
StructField("Sex", StringType(), True),
StructField("Age", StringType(), True),
StructField("Height", StringType(), True),
StructField("Weight", StringType(), True),
StructField("Team", StringType(), True),
StructField("NOC", StringType(), True),
StructField("Games", StringType(), True),
StructField("Year", StringType(), True),
StructField("Season", StringType(), True),
StructField("City", StringType(), True),
StructField("Sport", StringType(), True),
StructField("Event", StringType(), True),
StructField("Medal", StringType(), True),
])
## Read CSV into dataframe
file_path = "./data/athlete_events.csv"
athletes_df = (spark.read.format("csv")
.option("header", True)
.schema(schema)
.load(file_path))
## Showing first 10 rows
athletes_df.show(10, False)
## Print out schema details
athletes_df.printSchema()
athletes_df.show(3, vertical=True)
```
### Exploration & Cleansing
```
### Check for NA values by exploring columns
from pyspark.sql.functions import col
athletes_df.filter(col("Medal") == "NA").show(10)
## NA values in:
## Age, Height, Weight, Team, NOC (National Olympic Committee), and Medal.
```
#### Drop rows where age, height or weight have NA values.
```
athletes_df = athletes_df.filter((col("Age") != "NA") & (col("Height") != "NA") & (col("Weight") != "NA"))
## Check if correct
athletes_df.filter((col("Age") == "NA")).show(5)
athletes_df.filter((col("Height") == "NA")).show(5)
athletes_df.filter((col("Weight") == "NA")).show(5)
```
#### Check if other columns have the right values
```
### Check if ID, Age, Height, Weight and Year are indeed all integer values
### Checking ID first on non numeric values
from pyspark.sql.types import DataType, StructField, StructType, IntegerType, StringType
test_df = athletes_df.select('ID',col('ID').cast(IntegerType()).isNotNull().alias("Value"))
test_df.filter((col("Value") == False)).show(5)
### Checking Age on non numeric values
from pyspark.sql.types import DataType, StructField, StructType, IntegerType, StringType
test_df = athletes_df.select('Age',col('Age').cast(IntegerType()).isNotNull().alias("Value"))
test_df.filter((col("Value") == False)).show(5)
### As seen, something isn't right: there are gender and even name values in the Age column.
### Let's see how many rows have this problem
test_df.filter((col("Value") == True)).count()
### 500 out of 206188 values have this problem
test_df.filter((col("Value") == False)).count()
### Percentage of broken rows
print(str(round(500 / 206188 * 100,2)) + '%')
athletes_df.filter((col("Age") == "M")).show(5)
### The reason for this error is that some names contain a comma, which shifts the CSV columns.
### For now I'll drop these rows. This can be done with the following filter function
athletes_df = athletes_df.filter("CAST(Age AS INTEGER) IS NOT NULL")
athletes_df.filter((col("Age"))=="M").show()
### After dropping those rows, there are no wrong values left in Height either
test_df = athletes_df.select('Height',col('Height').cast(IntegerType()).isNotNull().alias("Value"))
test_df.filter((col("Value") == False)).show(5)
### As you can see, 500 rows were deleted.
athletes_df.count()
### Check the distinct values for seasons.
### As seen there are no odd values in this column.
athletes_df.select("Season").distinct().show()
### Check the length of NOC, as seen in the result this is always 3, so that is good.
from pyspark.sql.functions import length
test_df = athletes_df.withColumn("length_NOC", length("NOC")).filter((col("length_NOC") != 3))
test_df.show()
### Check if sex is only M and F, as seen this is correct.
athletes_df.filter((col("Sex")!="F") & (col("Sex")!="M")).show()
```
### Masking the name
To practice handling private information, I want to explore masking the name column.
#### Masking
```
### Masks name showing the first and last two characters.
### If name is less than 5 characters, it will only show the first character.
from pyspark.sql.functions import udf
def mask_name(columnValue):
if len(columnValue) < 5:
nameList=list(columnValue)
start = "".join(nameList[:1])
masking = 'x'*(len(nameList)-1)
masked_name = start+masking
else:
nameList=list(columnValue)
start = "".join(nameList[:2])
end = "".join(nameList[-2:])
masking = 'x'*(len(nameList)-4)
masked_name = start+masking+end
return masked_name
### Make the function work with PySpark
mask_name_udf = udf(mask_name, StringType())
### Test function
athletes_df.select("Name",mask_name_udf(athletes_df["Name"])).distinct().show(5, truncate=False)
athletes_df = athletes_df.withColumn("MaskedName",mask_name_udf(athletes_df["Name"])).drop(col("Name"))
athletes_df.show(1,vertical=True)
```
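As a side note on the design choice: a Python UDF serialises every row between the JVM and the Python worker, so for larger data the same masking can be expressed with Spark's built-in string functions. The sketch below is an alternative I added (it assumes the `Name` column is still present, i.e. it would replace the UDF-based `withColumn`/`drop` above rather than run after it):
```
from pyspark.sql.functions import col, length, when, expr

masked_builtin_df = athletes_df.withColumn(
    "MaskedName",
    when(
        length(col("Name")) < 5,
        expr("concat(substring(Name, 1, 1), repeat('x', length(Name) - 1))")
    ).otherwise(
        expr("concat(substring(Name, 1, 2), repeat('x', length(Name) - 4), substring(Name, -2, 2))")
    )
).drop("Name")
masked_builtin_df.select("MaskedName").show(5, truncate=False)
```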
### Fixing Schema
```
athletes_df.printSchema()
### ID, Age, Height, Weight and Year should be integers
athletes_final_df = (athletes_df.withColumn("PlayerID", col("ID").cast(IntegerType()))
.drop(col("ID"))
.withColumn("Name", col("MaskedName").cast(StringType()))
.withColumn("Age", col("Age").cast(IntegerType()))
.withColumn("Height", col("Height").cast(IntegerType()))
.withColumn("Weight", col("Weight").cast(IntegerType()))
.withColumn("Year", col("Year").cast(IntegerType()))
)
athletes_final_df.printSchema()
### Sort column order
athletes_sorted_df = athletes_final_df.select(
[athletes_final_df.columns[-2]]
+ [athletes_final_df.columns[-1]]
+ athletes_final_df.columns[:-3])
athletes_sorted_df.show(1, vertical=True)
athletes_sorted_df.printSchema()
```
### Save to parquet
```
## Write to a parquet file (left commented out because it crashes my laptop)
#output_path = './output/athlete_data'
#athletes_sorted_df.write.partitionBy("Games").mode("overwrite").parquet(output_path)
```
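If the partitioned write is too heavy for a laptop, a couple of lighter alternatives (my suggestions, also left commented out) are to drop the partitioning and reduce the number of output files, or to write only a sample while experimenting:
```
## Lighter-weight alternatives (uncomment to try):
#output_path = './output/athlete_data'
## Single unpartitioned parquet output
#athletes_sorted_df.coalesce(1).write.mode("overwrite").parquet(output_path)
## Or write a 10% sample while experimenting
#athletes_sorted_df.sample(fraction=0.1, seed=42).write.mode("overwrite").parquet(output_path + "_sample")
```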
### Aggregations
```
from pyspark.sql.functions import min, max, sum, sumDistinct, avg, col, expr, round, count
```
#### Medals per year
```
### Get year and medal
medals_per_year_df = athletes_sorted_df.select(
col("Year"),
col("Medal")
)
medals_per_year_df.show(5)
### Filter out all rows with NA
medals_per_year_df = medals_per_year_df.filter(col("Medal")!="NA")
medals_per_year_df.show(5)
### show amount of medals per Year
medals_per_year_df.groupBy("Year").agg(count("Medal").alias("Medals Amount")).orderBy("Year", ascending=False).show(5)
```
#### Medals per country
```
### Show distinct medal values.
athletes_sorted_df.select("Medal").distinct().show()
### create new dataframe and filter out NA values for the medal column.
medals_per_country_df = athletes_sorted_df.select(
col("Team"),
col("Medal")
)
medals_per_country_df = medals_per_country_df.filter(col("Medal")!="NA")
medals_per_country_df.show(5)
### Aggregate and order by medal amount
medals_per_country_df = medals_per_country_df.groupBy("Team","Medal").agg(count("Medal").alias("Amount")).orderBy("Amount", ascending=False)
medals_per_country_df.show(10)
```
#### Show information about height and weight
```
### This could also be used to make sure there are no odd values in the columns
athletes_sorted_df.select("Height", "Weight").describe().show()
### Weight of only 25?? Let's check out why that is.
athletes_sorted_df.select("Weight","Height","Age","PlayerID","Name","Team").filter(col("Weight")==25).distinct().show()
```
#### Which country has the most medals in basketball?
```
athletes_sorted_df.show(2)
best_in_basketball_df = athletes_sorted_df.select(
col("Team"),
col("Sport"),
col("Medal")
)
best_in_basketball_df = best_in_basketball_df.filter(col("Sport")=="Basketball")
best_in_basketball_df.show(3)
best_in_basketball_df = best_in_basketball_df.groupBy("Team","Sport").agg(count("Medal").alias("Amount")).orderBy("Amount", ascending=False)
best_in_basketball_df.show(5)
```
As you might expect, the US has the most medals in basketball.
|
github_jupyter
|
## The Golden Standard
In the previous session, we saw why and how association is different from causation. We also saw what is required to make association be causation.
$
E[Y|T=1] - E[Y|T=0] = \underbrace{E[Y_1 - Y_0|T=1]}_{ATET} + \underbrace{\{ E[Y_0|T=1] - E[Y_0|T=0] \}}_{BIAS}
$
To recap, association becomes causation if there is no bias. There will be no bias if \\(E[Y_0|T=0]=E[Y_0|T=1]\\). In words, association will be causation if the treated and control are equal, or comparable, except for the treatment they receive. Or, in more technical words, when the outcome of the untreated is equal to the counterfactual outcome of the treated. Remember that this counterfactual outcome is the outcome of the treated group if they had not received the treatment.
I think we did an OK job explaining in math terms how to make association equal to causation. But that was only in theory. Now, we look at the first tool we have to make the bias vanish: **Randomised Experiments**. Randomised experiments consist of randomly assigning individuals in a population to the treatment or to a control group. The proportion that receives the treatment doesn't have to be 50%. You could have an experiment where only 10% of your samples get the treatment.
Randomisation annihilates bias by making the potential outcomes independent of the treatment.
$
(Y_0, Y_1) \perp\!\!\!\perp T
$
This can be confusing at first. If the outcome is independent of the treatment, doesn't it mean that the treatment has no effect? Well, yes! But notice I'm not talking about the outcomes. Rather, I'm talking about the **potential** outcomes. The potential outcomes are how the outcome **would have been** under the treatment (\\(Y_1\\)) or under the control (\\(Y_0\\)). In randomized trials, we **don't** want the observed outcome to be independent of the treatment, since we think the treatment causes the outcome. But we do want the **potential** outcomes to be independent of the treatment.

Saying that the potential outcomes are independent of the treatment is saying that they would be, in expectation, the same in the treatment or the control group. In simpler terms, it means that treatment and control are comparable. Or that knowing the treatment assignment doesn't give me any information on how the outcome was prior to the treatment. Consequently, \\((Y_0, Y_1)\perp T\\) means that the treatment is the only thing generating a difference between the outcome of the treated and of the control. To see this, notice that independence implies precisely that
$
E[Y_0|T=0]=E[Y_0|T=1]=E[Y_0]
$
Which, as we've seen, makes it so that
$
E[Y|T=1] - E[Y|T=0] = E[Y_1 - Y_0]=ATE
$
So, randomization gives us a way to use a simple difference in means between treatment and control and call that the treatment effect.
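To make this concrete, here is a minimal simulation sketch (my addition, not part of the study data used below): we generate potential outcomes for a synthetic population with a true effect of 5, randomise the treatment, and check that the simple difference in means recovers that effect.
```
import numpy as np

np.random.seed(42)
n = 100_000
y0 = np.random.normal(70, 10, n)   # potential outcome under control
y1 = y0 + 5                        # potential outcome under treatment (true ATE = 5)
t = np.random.binomial(1, 0.5, n)  # random assignment: (Y0, Y1) independent of T
y = np.where(t == 1, y1, y0)       # observed outcome

# The simple difference in means recovers the ATE (up to sampling noise)
print(y[t == 1].mean() - y[t == 0].mean())
```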
## In a School Far, Far Away
In the year 2020, the coronavirus pandemic forced businesses to adapt to social distancing. Delivery services became widespread and big corporations shifted to a remote work strategy. With schools, it wasn't different. Many started their own online repositories of classes.
Four months into the crisis, many are wondering if the changes that were introduced can be maintained. There is no question that online learning has its benefits. For one, it is cheaper, since it can save on real estate and transportation. It can also be more digital, leveraging world-class content from around the globe, not just from a fixed set of teachers. In spite of all that, we still need to answer whether online learning has a negative or positive impact on students' academic performance.
One way to answer that is to take students from schools that give mostly online classes and compare them with students from schools that give lectures in traditional classrooms. As we know by now, this is not the best approach. It could be that online schools attract only the well-disciplined students, who would do better than average even if the classes were in person. In this case, we would have a positive bias, where the treated are academically better than the untreated: \\(E[Y_0|T=1] > E[Y_0|T=0]\\).
Or, on the flip side, it could be that online classes are cheaper and are composed mostly of less wealthy students, who might have to work besides studying. In this case, these students would do worse than those from the in-person schools even if they took in-person classes. If this were the case, we would have bias in the other direction, where the treated are academically worse than the untreated: \\(E[Y_0|T=1] < E[Y_0|T=0]\\).
So, although we could do simple comparisons, it wouldn't be very convincing. One way or another, we could never be sure if there wasn't any bias lurking around and masking our causal effect.

To solve that, we need to make the treated and untreated comparable: \\(E[Y_0|T=1] = E[Y_0|T=0]\\). One way to force this is to randomly assign online and in-person classes to students. If we managed to do that, the treated and untreated would be, on average, the same, except for the treatment they receive.
Fortunately, some economists have done that for us. They randomized not the students, but the classes. Some of them were randomly assigned to have face-to-face lectures, others, to have only online lectures and a third group, to have a blended format of both online and face-to-face lectures. At the end of the semester, they collected data on a standard exam.
Here is what the data looks like:
```
import pandas as pd
import numpy as np
data = pd.read_csv("./data/online_classroom.csv")
print(data.shape)
data.head()
```
We can see that we have 323 samples. It's not exactly big data, but it is something we can work with. To estimate the causal effect, we can simply compute the mean score for each of the treatment groups.
```
(data
.assign(class_format = np.select(
[data["format_ol"].astype(bool), data["format_blended"].astype(bool)],
["online", "blended"],
default="face_to_face"
))
.groupby(["class_format"])
.mean())
```
Yup. It's that simple. We can see that face-to-face classes yield a 78.54 average score, while online classes yield a 73.63 average score. Not such good news for the proponents of online learning. The \\(ATE\\) for the online class is thus 73.63 - 78.54 = -4.91. This means that online classes cause students to perform about 5 points lower, on average. That's it. You don't need to worry that online classes might have poorer students that can't afford face-to-face classes or, for that matter, that the students from the different treatments differ in any way other than the treatment they received. By design, the random experiment is made to wipe out those differences.
For this reason, a good sanity check to see if the randomisation was done right (or if you are looking at the right data) is to check if the treated are equal to the untreated in pre-treatment variables. In our data, we have information on gender and ethnicity, so we can see if they are equal across groups. For the `gender`, `asian`, `hispanic` and `white` variables, we can say that they look pretty similar. The `black` variable, however, looks a little bit different. This draws attention to what happens with a small dataset. Even under randomisation, it could be that, by chance, one group is different from another. In large samples, this difference tends to disappear.
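As a quick sketch of this sanity check (reusing the `data` frame loaded above; the exam score column keeps whatever name it has in the dataset), we can difference the group means directly. Under randomisation, the pre-treatment columns should be close to zero, while the score column reproduces the roughly -4.91 effect discussed above.
```
group_means = (data
    .assign(class_format = np.select(
        [data["format_ol"].astype(bool), data["format_blended"].astype(bool)],
        ["online", "blended"],
        default="face_to_face"
    ))
    .groupby("class_format")
    .mean())

# Online minus face-to-face: pre-treatment columns (gender, ethnicity dummies)
# should be near zero; the exam score column gives the treatment effect.
print(group_means.loc["online"] - group_means.loc["face_to_face"])
```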
## The Ideal Experiment
Randomised experiments are the most reliable way to get causal effects. It is a ridiculously simple technique and absurdly convincing. It is so powerful that most countries have it as a requirement for showing the effectiveness of new medicine. To make a terrible analogy, you can think of RCT and Aang, from Avatar: The Last Airbender, while other techniques are more like Sokka. He is cool and can pull some neat tricks here and there, but Aang can bend the four elements and connect with the spiritual world. Think of it this way, if we could, RCT would be all we would ever do to uncover causality. A well designed RCT is the dream of any scientist.

Unfortunately, they tend to be either very expensive or just plain unethical. Sometimes, we simply can't control the assignment mechanism. Imagine yourself as a physician trying to estimate the effect of smoking during pregnancy on baby weight at birth. You can't simply force a random portion of moms to smoke during pregnancy. Or say you work for a big bank and you need to estimate the impact of the credit line on customer churn. It would be too expensive to give random credit lines to your customers. Or that you want to understand the impact of increasing minimum wage on unemployment. You can't simply assign countries to have one or another minimum wage.
We will later see how to lower the randomisation cost by using conditional randomisation, but there is nothing we can do about unethical or unfeasible experiments. Still, whenever we deal with causal questions, it is worth thinking about the **ideal experiment**. Always ask yourself, if you could, **what would be the ideal experiment you would run to uncover this causal effect?**. This tends to shed some light in the way of how we can uncover the causal effect even without the ideal experiment.
## The Assignment Mechanism
In a randomised experiment, the mechanism that assigns units to one treatment or the other is, well, random. As we will see later, all causal inference techniques will somehow try to identify the assignment mechanism of the treatment. When we know for sure how this mechanism behaves, causal inference will be much more certain, even if the assignment mechanism isn't random.
Unfortunately, the assignment mechanism can't be discovered by simply looking at the data. For example, if you have a dataset where higher education correlates with wealth, you can't know for sure which one caused which by just looking at the data. You will have to use your knowledge about how the world works to argue in favor of a plausible assignment mechanism: is it the case that schools educate people, making them more productive and hence leading them to higher-paying jobs? Or, if you are pessimistic about education, you could say that schools do nothing to increase productivity and that this is just a spurious correlation, because only wealthy families can afford to have a kid get a higher degree.
In causal questions, we can usually argue both ways: that X causes Y, or that a third variable Z causes both X and Y, and hence the X and Y correlation is just spurious. It is for this reason that knowing the assignment mechanism leads to a much more convincing causal answer.
## Key Ideas
We looked at how randomised experiments are the simplest and most effective way to uncover causal impact. They do this by making the treatment and control groups comparable. Unfortunately, we can't run randomised experiments all the time, but it is still useful to think about the ideal experiment we would run if we could.
Someone who is familiar with statistics might be protesting right now that I didn't look at the variance of my causal effect estimate. How can I know that a 4.91-point decrease is not due to chance? In other words, how can I know if the difference is statistically significant? And they would be right. Don't worry. I intend to review some statistical concepts next.
## References
I like to think of this entire series as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'd also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
My final reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
The data used here is from a study of Alpert, William T., Kenneth A. Couch, and Oskar R. Harmon. 2016. ["A Randomized Assessment of Online Learning"](https://www.aeaweb.org/articles?id=10.1257/aer.p20161057). American Economic Review, 106 (5): 378-82.

|
github_jupyter
|
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from scipy.stats import poisson, norm
def compute_scaling_ratio(mu_drain,mu_demand,drift_sd,init_state):
drain_time = init_state/(mu_drain-mu_demand)
accum_std = drift_sd*np.sqrt(drain_time)
ratio = accum_std/init_state
return ratio
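# The two helpers below translate buffer levels into resource workloads (the
# expected busy time needed to clear what each resource must still process) and
# into the corresponding minimum draining times; they rely on the global rates
# mu_drain, mu_fast and mu_demand defined later in the notebook.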
def compute_workloads(arrival_buffer,inter_buffer,drain_buffer):
workload_1= arrival_buffer/(mu_drain/2)+(inter_buffer+drain_buffer)/(mu_drain)
workload_2 = (inter_buffer+arrival_buffer)/(mu_fast)
return workload_1, workload_2
def compute_draining_times(arrival_buffer,inter_buffer,drain_buffer):
workload_1, workload_2 = compute_workloads(arrival_buffer,inter_buffer,drain_buffer)
drain_time_1= workload_1/(1-mu_demand*2/mu_drain)
drain_time_2 = workload_2/(1-mu_demand/mu_fast)
return drain_time_1, drain_time_2
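# Single make-to-stock buffer under a hedging-point policy: the feed is shut off
# whenever the surplus exceeds h_thres, otherwise the buffer evolves as
# surplus[t+1] = surplus[t] + feed[t] - demand[t] (negative values mean backlog).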
def simulate_single_buffer_pull(feed_sequence,
demand_sequence,
h_thres,
init_state,
flow,
init_with_zeros = False):
demand_buffer = np.zeros(len(feed_sequence)+1)
demand_buffer[0] = init_state if not init_with_zeros else 0
for i,(f,d) in enumerate(zip(feed_sequence,demand_sequence)):
if demand_buffer[i] > h_thres:
f = 0
demand_buffer[i+1] = demand_buffer[i]+f-d
return demand_buffer
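# Two buffers replenished by one shared resource (at most one of z1, z2 is 1 per
# step): a buffer is refilled when it is at or below its hedging threshold
# h_thres_*, and the safety-stock thresholds sf_thres_* decide which buffer gets
# priority when both are low; sf_1 flips the order of those priority checks.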
def simulate_double_buffer_pull(feed_sequence_1,
feed_sequence_2,
demand_sequence_1,
demand_sequence_2,
h_thres_1,
h_thres_2,
sf_thres_1,
sf_thres_2,
sf_1):
buffer_1 = np.zeros(len(feed_sequence_1)+1)
buffer_2 = np.zeros(len(feed_sequence_1)+1)
buffer_1[0] = 300
buffer_2[0] = 200
for i,(f1,f2,d1,d2) in enumerate(zip(feed_sequence_1,feed_sequence_2,demand_sequence_1,demand_sequence_2)):
z1 = 0
z2 = 0
if sf_1:
if buffer_2[i] <= sf_thres_2:
z1 = 0
z2 = 1
if buffer_1[i] <= sf_thres_1:
z1 = 1
z2 = 0
else:
if buffer_1[i] <= sf_thres_1:
z1 = 1
z2 = 0
if buffer_2[i] <= sf_thres_2:
z1 = 0
z2 = 1
if buffer_2[i] <= h_thres_2 and z1 == 0:
z2 = 1
if buffer_1[i] <= h_thres_1 and z2 == 0:
z1 = 1
#if i % 2 == 0:
# z1 = 1
# z2 = 0
#else:
# z1 = 0
# z2 = 1
#if buffer_2[i] > h_thres_2:
# z2 = 0
#if buffer_1[i] > h_thres_1:
# z1 = 0
assert z1+z2 < 2
buffer_1[i+1] = buffer_1[i]+z1*f1-d1
buffer_2[i+1] = buffer_2[i]+z2*f2-d2
return buffer_1,buffer_2
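# Two buffers in series: feed 1 fills buffer 1 up to h_thres_1, resource 2 moves
# material from buffer 1 to buffer 2 up to h_thres_2 (limited by what buffer 1
# holds), and demand drains buffer 2 (which may go negative, i.e. backlog).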
def simulate_tandem_buffer_pull(feed_sequence_1,
feed_sequence_2,
demand_sequence,
h_thres_1,
h_thres_2):
buffer_1 = np.zeros(len(feed_sequence_1)+1)
buffer_2 = np.zeros(len(feed_sequence_1)+1)
buffer_1[0] = h_thres_1
buffer_2[0] = 0
for i,(f1,f2,d) in enumerate(zip(feed_sequence_1,feed_sequence_2,demand_sequence)):
z1 = 1
z2 = 1
if buffer_2[i] > h_thres_2:
z2 = 0
if buffer_1[i] > h_thres_1:
z1 = 0
f2 = min(f2,buffer_1[i])
assert z1*f1 <= 1
assert z2*f2 <= 1
buffer_1[i+1] = buffer_1[i]+z1*f1-z2*f2
assert buffer_1[i+1] >= 0
buffer_2[i+1] = buffer_2[i]+z2*f2-d
return buffer_1,buffer_2
mu_demand = 0.33
mu_feed_1 = 0.34
mu_feed_2 = 0.34
duration = int(1e6)
np.random.seed(100)
demand_seq = np.random.binomial(1,mu_demand,duration)
feed_seq_1 = np.random.binomial(1,mu_feed_1,duration)
feed_seq_2 = np.random.binomial(1,mu_feed_2,duration)
c_s = 1
c_d = 10
0.33/0.34
0.33/(0.005),0.33/(0.5-0.33)
buffer_1,buffer_2 = simulate_tandem_buffer_pull(feed_seq_1,feed_seq_2,demand_seq,50,55)
h_optimal = np.percentile(-buffer_2,1000/11)
h_range = range(20,80,5)
deficit_cost = np.zeros_like(h_range)
surplus_cost = np.zeros_like(h_range)
sf_cost = np.zeros_like(h_range)
for i,h1 in enumerate(h_range):
print(i)
buffer_1,buffer_2 = simulate_tandem_buffer_pull(feed_seq_1,feed_seq_2,demand_seq,h1,0)
h_optimal = np.percentile(-buffer_2,1000/11)
surplus = np.where(buffer_2+h_optimal >= 0,buffer_2+h_optimal,0)
deficit = np.where(buffer_2+h_optimal < 0,buffer_2+h_optimal,0)
deficit_cost[i] = np.sum(-deficit)*c_d
surplus_cost[i] = np.sum(surplus)*c_s
sf_cost[i] = np.sum(buffer_1)*c_s
mu_range = np.arange(0.34,0.55,0.01)
deficit_cost = np.zeros_like(mu_range)
surplus_cost = np.zeros_like(mu_range)
sf_cost = np.zeros_like(mu_range)
for i,mu in enumerate(mu_range):
print(i)
np.random.seed(100)
feed_seq_1 = np.random.binomial(1,mu,duration)
buffer_1,buffer_2 = simulate_tandem_buffer_pull(feed_seq_1,feed_seq_2,demand_seq,500,0)
a = np.percentile(buffer_1,1)
print(a)
h_optimal = np.percentile(-buffer_2,1000/11)
surplus = np.where(buffer_2+h_optimal >= 0,buffer_2+h_optimal,0)
deficit = np.where(buffer_2+h_optimal < 0,buffer_2+h_optimal,0)
deficit_cost[i] = np.sum(-deficit)*c_d
surplus_cost[i] = np.sum(surplus)*c_s
sf_cost[i] = np.sum(np.maximum(buffer_1-a,0))
#cost = np.sum(surplus)*c_s + np.sum(-deficit)*c_d + np.sum(buffer_1)*c_s
mu_range
plt.plot((mu_range-0.33)/0.01,sf_cost/min(sf_cost))
plt.plot((mu_range-0.33)/0.01,sf_cost/min(sf_cost),".")
plt.figure(figsize=(8,6))
plt.plot(0.33/mu_range,sf_cost/min(sf_cost))
plt.plot(0.33/mu_range,sf_cost/min(sf_cost),".")
plt.vlines(0.97,0,10,label="mu_2 load = 0.97")
plt.xlabel("mu_1 load")
plt.ylabel("Relative cost")
plt.legend()
h_optimal = np.percentile(-buffer_2,1000/11)
a = []
for i in range(-10,10):
surplus = np.where(buffer_2+h_optimal+i >= 0,buffer_2+h_optimal+i,0)
deficit = np.where(buffer_2+h_optimal+i < 0,buffer_2+h_optimal+i,0)
a.append(np.sum(-deficit)*c_d + np.sum(surplus)*c_s)
plt.plot(range(-10,10),a)
sf_cost
h_optimal = np.percentile(-buffer_2,1000/11)
#plt.plot(h_range,sf_cost)
#norm = np.min(deficit_cost+surplus_cost)
plt.figure(figsize=(10,8))
plt.fill_between(h_range,sf_cost/norm,label="safety_stocks_cost")
plt.plot(h_range,sf_cost/norm,"k.")
plt.fill_between(h_range,(surplus_cost+sf_cost)/norm,sf_cost/norm,label="surplus cost")
plt.plot(h_range,(surplus_cost+sf_cost)/norm,"k.")
plt.fill_between(h_range,(surplus_cost+sf_cost)/norm,(deficit_cost+surplus_cost+sf_cost)/norm,label="deficit cost")
plt.plot(h_range,(deficit_cost+surplus_cost+sf_cost)/norm,"k.")
plt.hlines(1,20,75,"k",label="infinite supply reference")
plt.legend()
max(buffer_2)
np.percentile(buffer_2,1)
a = plt.hist(-buffer_2,bins=range(-1,600))
a = plt.hist(-buffer_2,bins=range(-1,200))
h_optimal
plt.figure(figsize=(10,4))
plt.plot(buffer_2,label="buffer 2")
plt.plot(buffer_1,label="buffer 1")
#plt.plot(buffer_2,label="buffer 2")
plt.legend()
x3 = buffer_2
x2 = buffer_2
x1 = buffer_2
c,d = np.histogram(x2,bins=range(-150,0))
#plt.plot(b[:-1],np.log(a))
plt.plot(d[:-1],np.log10(c))
plt.plot(e[:-1],np.log10(f))
plt.figure(figsize=(10,4))
#a = plt.hist(buffer_2,bins=range(-150,10),label = "30")
a = plt.hist(-x3,bins=range(-1,150),alpha = 1,label="non-limiting")
a = plt.hist(-x2,bins=range(-1,150),alpha = 0.75,label="45")
a = plt.hist(-x1,bins=range(-1,200),alpha=0.5,label="25")
plt.legend()
plt.figure(figsize=(10,4))
#a = plt.hist(buffer_2,bins=range(-150,10),label = "30")
a = plt.hist(-x1,bins=range(-1,250),alpha=1,label="25")
a = plt.hist(-x2,bins=range(-1,150),alpha = 0.75,label="45")
a = plt.hist(-x3,bins=range(-1,150),alpha = 0.5,label="non-limiting")
#a = plt.hist(buffer_2,bins=range(-100,50))
plt.legend()
mu_demand = 0.33
mu_feed = 0.68
duration = int(1e5)
demand_seq_1 = np.random.binomial(1,mu_demand,duration)
demand_seq_2 = np.random.binomial(1,mu_demand,duration)
feed_seq_1 = np.random.binomial(1,mu_feed,duration)
feed_seq_2 = np.random.binomial(1,mu_feed,duration)
buffer_1,buffer_2 = simulate_double_buffer_pull(feed_seq_1,feed_seq_2,
demand_seq_1, demand_seq_2,
30,3,3,3,sf_1=True)
plt.figure(figsize=(10,4))
plt.plot(buffer_1,label="buffer 1")
plt.plot(buffer_2,label="buffer 2")
plt.legend()
buffer_1,buffer_2 = simulate_double_buffer_pull(feed_seq_1,feed_seq_2,
demand_seq_1, demand_seq_2,
30,3,3,3,False)
plt.figure(figsize=(10,4))
plt.plot(buffer_1,label="buffer 1")
plt.plot(buffer_2,label="buffer 2")
plt.legend()
buffer_1,buffer_2 = simulate_double_buffer_pull(feed_seq_1,feed_seq_2,
demand_seq_1, demand_seq_2,
3,30,3,3,True)
plt.figure(figsize=(10,4))
plt.plot(buffer_1,label="buffer 1")
plt.plot(buffer_2,label="buffer 2")
plt.legend()
buffer_1,buffer_2 = simulate_double_buffer_pull(feed_seq_1,feed_seq_2,
demand_seq_1, demand_seq_2,
3,30,3,3,False)
plt.figure(figsize=(10,4))
plt.plot(buffer_1,label="buffer 1")
plt.plot(buffer_2,label="buffer 2")
plt.legend()
mu_demand = 0.33
mu_feed = 0.34
c_s = 1
c_d = 10
duration = int(1e5)
demand_seq = np.random.binomial(1,mu_demand,duration)
feed_seq = np.random.binomial(1,mu_feed,duration)
demand_buffer = simulate_single_buffer_pull(feed_seq,demand_seq,60,0,False)
surplus = np.where(demand_buffer >= 0,demand_buffer,0)
deficit = np.where(demand_buffer < 0,demand_buffer,0)
plt.plot(demand_buffer)
#plt.plot(demand_buffer[:100000])
plt.figure(figsize=(8,6))
plt.fill_between(np.arange(len(surplus)),surplus,0)
plt.fill_between(np.arange(len(surplus)),deficit,0)
cost = np.sum(surplus)*c_s + np.sum(-deficit)*c_d
cost_record = []
hedging = np.arange(-5,140,5)
hedging = np.arange(40,70,1)
for h in hedging:
demand_buffer = simulate_single_buffer_pull(feed_seq,demand_seq,h,h,False)
surplus = np.where(demand_buffer >= 0,demand_buffer,0)
deficit = np.where(demand_buffer < 0,demand_buffer,0)
cost = np.sum(surplus)*c_s + np.sum(-deficit)*c_d
cost_record.append(cost)
f,ax = plt.subplots(2,1,figsize=(10,8),sharex=True)
ax[0].hist(-demand_buffer,bins=range(-20,140),density=True)
ax[0].vlines(h_optimal,0,0.04)
ax[1].plot(hedging,cost_record/min(cost_record))
ax[1].plot(hedging,cost_record/min(cost_record),"o")
ax[1].vlines(h_optimal,1,1.1)
f,ax = plt.subplots(2,1,figsize=(10,8),sharex=True)
ax[0].hist(-demand_buffer,bins=range(-20,140),density=True)
ax[0].vlines(h_optimal,0,0.04)
ax[1].plot(hedging,cost_record/min(cost_record))
ax[1].plot(hedging,cost_record/min(cost_record),"o")
ax[1].vlines(h_optimal,1,5)
1000/11
h_optimal
h_optimal = np.percentile(-demand_buffer,1000/11)
plt.hist(-demand_buffer,bins=range(120),density=True)
plt.vlines(h_optimal,0,0.04)
h_optimal = np.percentile(-demand_buffer,1000/11)
#np.percentile(-demand_buffer,1000/11)
c1 = 1
c2 = 2
c3 = 1
c1 = 1.5
c2 = 1
c3 = 2
c1 = 0.1
c2 = 1
c3 = 1
costs = {}
betas = {}
sc_ratios = {}
eff_rates = {}
slopes = {}
hedging_levels = {}
percentile = 4
hedging = np.concatenate((np.arange(0,20,2),np.arange(20,150,10)))
from sklearn.linear_model import LinearRegression
hedging = np.arange(2,40,2)
arrivals = []
#scale_list = [0.1,0.3,1,3]
#scale_list = [0.2,0.4,0.5,0.6,0.7]
scale_list = np.arange(0.35,0.37,0.001)
scale_list = np.arange(0.32,0.333,0.001)
scale_list = np.arange(0.335,0.345,0.001)
scale_list = [0.33]
hedging = np.concatenate((np.arange(0,20,2),np.arange(20,150,10)))
#hedging = np.arange(0,150,10)
hedging = np.arange(50,600,50)
hedging = np.arange(5,100,5)
#hedging = np.arange(7,8,1)
#hedging = [beta_h]
#hedging = np.arange(30,200,10)
#hedging = np.arange(20,500,50)
#hedging = np.concatenate((np.arange(50,500,50),np.arange(500,11000,2000)))
hedging = np.arange(100,11000,1000)
#hedging = np.arange(2,100,5)
#hedging = np.arange(0,100,5)
#offset = -100000
#hedging = [100]
# settings for scale = 3
dur_star = 10000
omega_star = 7.5645
#init_state_star = 210000
#dur_star = int(4500000*1)
duration = dur_star
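# Sweep: for each demand rate in scale_list, set the service rates, compute the
# hedging parameters (beta_h, beta_ss, scaling ratio) and simulate the re-entrant
# line (simulate_simple_reentrant_line is defined outside this excerpt) over a
# range of thresholds, accumulating each buffer's holding cost in the costs dict.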
for scale in reversed(scale_list):
print(scale)
scale_costs = []
scale_rates = []
#init_state = 7e4*scale
mu_demand = 0.33
mu_drain = mu_transfer = 0.35*2
mu_fast = 0.34
slack_capacity_h = mu_fast-mu_drain/2
std_h = np.sqrt(mu_drain*(1-mu_drain)+mu_fast*(1-mu_fast))
omega_h = std_h/slack_capacity_h
print(slack_capacity_h,std_h,omega_h)
print()
slack_capacity_ss = mu_fast-mu_drain
std_ss = np.sqrt(mu_fast*(1-mu_fast)+mu_drain*(1-mu_drain))
omega_ss = std_ss/slack_capacity_ss
duration = int(1000000 * 1.5 * 0.5)
print(scale,duration)
#print(scale,omega)
#continue
#print(omega/omega_star)
#duration = int((omega/omega_star)**2*dur_star)
init_state = 10000
#init_state = 0
n_seeds = 1#100
beta_h = (1/4)*(percentile**2)*omega_h# + slack_capacity/std
beta_ss = (1/4)*(percentile**2)*omega_ss
scaling_ratio = compute_scaling_ratio(mu_drain,mu_demand,std_h,init_state)
print(scaling_ratio)
hedge = True
for h in reversed(hedging):
print(h)
if hedge:
h_thres = h
ss_thres = mu_drain+beta_ss*std_ss
else:
h_thres = beta_h*std_ss
ss_thres = mu_drain+h*std_ss
print(h_thres)
#thres = 2*mu_drain+h*np.sqrt(mu_drain+mu_fast)
#thres = h*10
buf_1_samples = []
buf_2_samples = []
buf_3_samples = []
np.random.seed(7)
for _ in range(n_seeds):
demand_seq = np.random.binomial(1,mu_demand,duration)
transfer_seq = np.random.binomial(1,mu_transfer,duration)
fast_seq = np.random.binomial(1,mu_fast,duration)
drain_seq = np.random.binomial(1,mu_drain,duration)
arrival_buffer,inter_buffer,drain_buffer = simulate_simple_reentrant_line(
demand_seq[:duration],
transfer_seq[:duration],
fast_seq[:duration],
drain_seq[:duration],
h_thres=h_thres,
ss_thres=5,
init_state=init_state,
flow=False,
init_with_zeros=False)
#try:
# end = np.where((arrival_buffer < 10) & (inter_buffer < 10))[0][0]
#except:
end = len(arrival_buffer)
buf_1_samples.append(sum(arrival_buffer[0:end]*c1))
buf_2_samples.append(sum(inter_buffer[0:end]*c2))
buf_3_samples.append(sum(drain_buffer[0:end]*c3))
#arrivals.append(arrival_buffer)
scale_costs.append((np.mean(buf_1_samples),np.mean(buf_2_samples),np.mean(buf_3_samples)))
#scale_rates.append(zeta*mu_transfer)
#scale_costs.append(sum(arrival_buffer*c1))
'''
a,b = np.histogram(inter_buffer,bins=40,normed=True)
b = b.reshape(-1,1)
clf = LinearRegression()
clf.fit(b[:-15,:],np.log(a[:-14]))
plt.plot(b[:-15],np.log(a[:-14]),label=scale)
slopes[scale] = clf.coef_
'''
costs[scale] = np.array(scale_costs[::-1])
betas[scale] = beta_h
sc_ratios[scale] = scaling_ratio
eff_rates[scale] = np.array(scale_rates[::-1])
plt.legend()
costs
#arrivals_2 = arrivals
plt.plot(np.cumsum(np.array(arrivals_10).mean(axis=0)))
plt.plot(np.cumsum(np.array(arrivals).mean(axis=0)),"r")
#arrivals_10 = arrivals
#plt.plot(np.array(arrivals_30).mean(axis=0)[:2000])
plt.plot(np.array(arrivals_10).mean(axis=0)[:200000])
plt.plot(np.array(arrivals).mean(axis=0)[:20000],"r")
no_h_cost = ref_cost
no_h_cost
min_t_cost/no_h_cost
no_h_cost/min_t_cost
bad_cost = ref_cost
bad_cost/ref_cost
scale = 0.33
beta = beta_ss#betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_3)
min_t_cost = min(t_cost)
#t_cost = t_cost/min_t_cost
#ref_cost = no_ss_cost
ref_cost = min_t_cost
#ref_cost = no_h_cost
t_cost = t_cost/ref_cost
cost_1=np.array(cost_1)/ref_cost
cost_2=np.array(cost_2)/ref_cost
cost_3=np.array(cost_3)/ref_cost
indexes = np.where(t_cost < 100)[0]
plt.figure(figsize=(16,8))
plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.5,label="Buffer 2 cost")
plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.5, label="Buffer 3 cost")
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
#plt.vlines(10,min(t_cost[indexes]),max(t_cost[indexes]),label="empirical hedging")
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r",label="+3% margin")
#plt.hlines(0.97,min(hedging[indexes]),max(hedging[indexes]),color="r",label="-3% margin")
#plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Hedging threshold h2")
plt.legend()
set(np.array([1,2]))
scale = 0.33
beta = beta_ss#betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_3)
min_t_cost = min(t_cost)
#t_cost = t_cost/min_t_cost
#ref_cost = no_ss_cost
#ref_cost = min_t_cost
t_cost = t_cost/ref_cost
cost_1=np.array(cost_1)/ref_cost
cost_2=np.array(cost_2)/ref_cost
cost_3=np.array(cost_3)/ref_cost
indexes = np.where(t_cost < 100)[0]
plt.figure(figsize=(12,8))
plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.5,label="Buffer 2 cost")
plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.5, label="Buffer 3 cost")
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
#plt.vlines(10,min(t_cost[indexes]),max(t_cost[indexes]),label="empirical hedging")
plt.hlines(1.03*min(t_cost),min(hedging[indexes]),max(hedging[indexes]),color="r",label="+3% margin")
#plt.hlines(0.97,min(hedging[indexes]),max(hedging[indexes]),color="r",label="-3% margin")
#plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Hedging threshold")
plt.legend()
ref_cost
scale = 0.33
beta = beta_ss#betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
t_cost = np.array(cost_2)+np.array(cost_3)
#t_cost = np.array(cost_3)
min_t_cost = min(t_cost)
#t_cost = t_cost/min_t_cost
#ref_cost = no_ss_cost
ref_cost = min_t_cost
t_cost = t_cost/ref_cost
cost_1=np.array(cost_1)/ref_cost
cost_2=np.array(cost_2)/ref_cost
cost_3=np.array(cost_3)/ref_cost
indexes = np.where(t_cost < 100)[0]
plt.figure(figsize=(12,4))
#plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
#plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.5,label="Buffer 2 cost")
#plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.5, label="Buffer 3 cost")
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
#plt.vlines(10,min(t_cost[indexes]),max(t_cost[indexes]),label="empirical hedging")
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r",label="+3% margin")
#plt.hlines(0.97,min(hedging[indexes]),max(hedging[indexes]),color="r",label="-3% margin")
#plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Hedging threshold")
plt.legend()
(2120/(1-0.33/0.345))/(2770/(1-0.33/0.35))
np.sum(costs[0.33])/no_ss_cost
no_ss_cost = np.sum(costs[0.33])
no_ss_cost
plt.plot(inter_buffer[:10000], label="buffer 3")
np.sum(inter_buffer == 0)
np.sum(inter_buffer == 0)
-1.02*2977.9+1.05*2874.3
-1.02*2972.+1.05*2868.6
2874.3*0.35,2868.6*0.35
988+18,983+21
2/0.35
plt.plot(inter_buffer[8000:10000], label="buffer 3")
end = 100000
plt.figure(figsize=(16,6))
#plt.plot(arrival_buffer[:end],label="buffer 1")
plt.plot(inter_buffer[30000:end], label="buffer 2")
plt.plot(drain_buffer[30000:end], label="buffer 3")
plt.legend()
plt.hist(inter_buffer,bins=np.arange(150))
plt.hist(drain_buffer,bins=np.arange(150))
end = 80000
plt.figure(figsize=(16,6))
#plt.plot(arrival_buffer[:end],label="buffer 1")
plt.plot(inter_buffer[:end], label="buffer 2")
#plt.plot(drain_buffer[:end], label="buffer 3")
plt.legend()
plt.figure(figsize=(16,6))
plt.plot(arrival_buffer,label="buffer 1")
plt.plot(inter_buffer, label="buffer 2")
plt.plot(drain_buffer, label="buffer 3")
#plt.hlines(3,0,15000, label = "ss")
#plt.hlines(5,0,15000, label = "ss")
plt.legend()
f,ax = plt.subplots(2,1,figsize=(16,10))
ax[0].plot(arrival_buffer,label="buffer 1")
ax[0].plot(inter_buffer, label="buffer 2")
ax[0].plot(drain_buffer, label="buffer 3")
ax[0].set_ylabel("Buffer level")
ax[0].legend()
drain_time_1,drain_time_2=compute_draining_times(arrival_buffer,inter_buffer,drain_buffer)
ax[1].plot(drain_time_1,label="resource 1")
ax[1].plot(drain_time_2,label="resource 2")
ax[1].set_ylabel("Draining time")
ax[1].legend()
#ax[1].gca().set_aspect("equal")
drain_time_1,drain_time_2=compute_draining_times(arrival_buffer,inter_buffer,drain_buffer)
workload_1,workload_2 = compute_workloads(arrival_buffer,inter_buffer,drain_buffer)
np.array([i for i in range(10)])
np.where(np.array([i for i in range(10)]) > 5)[0]
plt.figure(figsize=(8,8))
plt.plot(drain_time_1,label="1")
plt.plot(drain_time_2)
plt.legend()
plt.gca().set_aspect("equal")
plt.plot(workload_1)
plt.plot(workload_2)
#plt.figure(figsize=(16,6))
f,ax = plt.subplots(2,1,figsize=(16,8))
ax[0].plot(arrival_buffer,label="buffer 1")
ax[0].plot(inter_buffer, label="buffer 2")
ax[0].plot(drain_buffer, label="buffer 3")
ax[1].plot(arrival_buffer*c1+inter_buffer*c2+drain_buffer*c3,label="Total cost")
#plt.hlines(3,0,15000, label = "ss")
#plt.hlines(5,0,15000, label = "ss")
ax[0].legend()
ax[1].legend()
cost_2 = arrival_buffer*c1+inter_buffer*c2+drain_buffer*c3
plt.plot(arrival_buffer*c1+inter_buffer*c2+drain_buffer*c3)
plt.plot(cost_2)
plt.plot(cost_1)
plt.plot(cost_2)
plt.figure(figsize=(16,6))
plt.plot(arrival_buffer,label="buffer 1")
plt.plot(inter_buffer, label="buffer 2")
plt.plot(drain_buffer, label="buffer 3")
#plt.hlines(3,0,15000, label = "ss")
#plt.hlines(5,0,15000, label = "ss")
plt.legend()
workload = arrival_buffer/(mu_drain/2)+(inter_buffer+drain_buffer)/(mu_drain)
workload_2 = (inter_buffer+arrival_buffer)/(mu_fast)
plt.plot(workload[:100000],workload_2[:100000])
min_drain_time = workload/(1-mu_demand*2/mu_drain)
np.mean(min_drain_time),np.median(min_drain_time)
np.mean(min_drain_time > 1000)
a,b,_ = plt.hist(min_drain_time,bins=np.arange(0,14000,50),density=True)
np.argmax(a)
a[:20]
b[:20]
b[17]
np.mean(arrival_buffer)
np.mean(inter_buffer)
plt.figure(figsize=(10,8))
dur = np.arange(54000,65000)
#dur = np.arange(300000)
plt.fill_between(dur,drain_buffer[dur],label = "buffer 3")
#plt.plot(dur,drain_buffer[dur])
plt.fill_between(dur,-inter_buffer[dur],label='-buffer 2')
#plt.fill_between(dur,-inter_buffer[dur],np.minimum(-inter_buffer[dur],-offset),label='-buffer 2')
#plt.plot(dur,-inter_buffer[dur])
#plt.plot(dur,a[dur]-offset,"k",alpha=0.5)
plt.ylim(top=50,bottom=-100)
plt.legend()
np.mean(arrival_buffer)
a = drain_buffer
std_h
np.percentile(inter_buffer,33)
350*0.16
inter_buffer_ss = inter_buffer
plt.figure(figsize=(10,6))
plt.hist(inter_buffer,bins=np.arange(150),density=True,label="long drain")
plt.vlines(np.percentile(inter_buffer,33),0,0.04,label="long_drain")
plt.figure(figsize=(10,6))
plt.hist(inter_buffer,bins=np.arange(150),density=True,label="long drain")
plt.hist(inter_buffer_ss,bins=np.arange(150),density=True,label="steady state",alpha=0.7)
plt.xlabel("Buffer 2 level")
plt.ylabel("Occupancy probability")
h = np.percentile(inter_buffer,33)
plt.vlines(np.percentile(inter_buffer,33),0,0.04,label="long_drain")
plt.vlines(np.percentile(inter_buffer_ss,33),0,0.04,label="steady state",color="r")
plt.legend()
np.percentile(150-drain_buffer,33)
1/(omega_h*std_h)
plt.plot(drain_buffer)
-np.log(0.33)/(0.01*3.5)
b,a = zip(*slopes.items())
clf = LinearRegression()
clf.fit(np.array(b).reshape(-1,1),a)
clf.coef_
plt.plot(np.array(b),a,".")
plt.plot(np.array(b),clf.predict(np.array(b).reshape(-1,1)))
np.histogram(inter_buffer,bins=50)
beta_ss = (1/4)*(percentile**2)*omega_ss
beta_ss
mu_demand,mu_transfer,mu_fast,mu_drain
std_h**2*(1-omega_h*2*(c3/c2))/(4*slack_capacity_h)
plt.plot(arrival_buffer[:1000000])
np.sum(drain_buffer)/(26*len(drain_buffer))
#
#plt.plot(arrival_buffer[:1000000])
#plt.plot(inter_buffer[:1000000])
plt.plot(drain_buffer[:1000000])
plt.plot(inter_buffer[:1000000],label='safety stocks')
plt.legend()
#
#plt.plot(arrival_buffer[:1000000])
#plt.plot(inter_buffer[:1000000])
#plt.plot(drain_buffer[:1000000])
plt.plot(inter_buffer[:100000000],label='safety stocks')
plt.legend()
max(drain_buffer)- np.percentile(drain_buffer,66)
np.percentile(inter_buffer,33)
plt.plot(inter_buffer)
plt.plot(np.arange(199,-1,-1),0.035*np.exp(np.arange(200)*-0.035))
std_h
(0.7*omega_h*std_h)
s = 1/(0.7*omega_h*std_h)
s
1/clf.coef_
plt.hist(drain_buffer,bins=40,density=True)
#plt.plot(b[15:,:],clf.predict(b[15:,:]))
np.log(0.66)/s
plt.figure(figsize=(10,6))
a,b,_ = plt.hist(drain_buffer,bins=30,density=True)
b = b.reshape(-1,1)
clf = LinearRegression()
clf.fit(b[15:,:],np.log(a[14:]))
print(clf.coef_)
#plt.plot(np.arange(149,-1,-1),clf.coef_[0]*np.exp(np.arange(150)*-clf.coef_[0]))
plt.plot(np.arange(149,-1,-1),s*np.exp(np.arange(150)*-s),linewidth=2)
plt.vlines(150+np.log(0.66)/s,0,0.04,color="r")
plt.xlabel("Buffer 3 level")
plt.ylabel("Occupancy probability")
np.percentile(a,66)
1/omega_h
len(a)
len(b)
0.33-0.34
3/200
mu_demand/mu_fast
mu_transfer/2/mu_fast
5/140
-np.log(1-0.33)/(3.5*0.015)
plt.plot(b[10:],np.log(a[9:]))
#
#plt.plot(arrival_buffer[:1000000])
#plt.plot(inter_buffer[:1000000])
plt.plot(-drain_buffer[:1000000])
plt.plot(inter_buffer[:1000000],label='safety stocks')
plt.legend()
beta_h*std_h/(beta_ss*std_ss)
beta_h
plt.figure(figsize=(14,8))
run = np.arange(10000)
plt.fill_between(run,inter_buffer[run],label="buffer 2")
plt.fill_between(run,drain_buffer[run],label="buffer 3")
plt.legend()
omega_h
cost_3
scale = 0.33
beta = beta_ss#betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
min_t_cost = min(t_cost)
t_cost = t_cost/min_t_cost
cost_1=np.array(cost_1)/min_t_cost
cost_2=np.array(cost_2)/min_t_cost
cost_3=np.array(cost_3)/min_t_cost
indexes = np.where(t_cost < 5)[0]
plt.figure(figsize=(12,8))
plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.1)
plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.1)
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
plt.vlines(beta,min(t_cost[indexes]),max(t_cost[indexes]),label="beta")
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r",label="3% margin")
plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Threshold (xSTD)")
plt.legend()
scale = 0.33
beta = betas[scale]
sc_ratio = sc_ratios[scale]
cost_1,cost_2,cost_3 = zip(*costs[scale])
cost_1=np.array(cost_1)
cost_2=np.array(cost_2)
cost_3=np.array(cost_3)
t_cost = np.array(cost_1)+np.array(cost_2)+np.array(cost_3)
min_t_cost = min(t_cost)
t_cost = t_cost/min_t_cost
cost_1=np.array(cost_1)/min_t_cost
cost_2=np.array(cost_2)/min_t_cost
cost_3=np.array(cost_3)/min_t_cost
indexes = np.where(t_cost < 2e6)[0]
plt.figure(figsize=(12,8))
plt.plot(hedging[indexes],cost_1[indexes],label="Buffer 1 cost")
#plt.plot(hedging[indexes],cost_1[indexes],"o")
#plt.plot(hedging[indexes],cost_2[indexes])
plt.fill_between(hedging[indexes],cost_1[indexes]+cost_2[indexes],cost_1[indexes],alpha=0.1)
plt.fill_between(hedging[indexes],t_cost[indexes],cost_1[indexes]+cost_2[indexes],alpha=0.1)
plt.plot(hedging[indexes],t_cost[indexes],label="Total cost")
plt.plot(hedging[indexes],t_cost[indexes],".")
plt.vlines(beta,min(t_cost[indexes]),max(t_cost[indexes]),label="beta")
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r",label="3% margin")
plt.title("{:.3f}".format(sc_ratio))
plt.ylabel("Relative cumulative cost")
plt.xlabel("Threshold (xSTD)")
plt.legend()
scale = 3
beta = betas[scale]
sc_ratio = sc_ratios[scale]
cost = costs[scale]
r_cost = cost/min(cost)
indexes = np.where(r_cost < 1.2)[0]
plt.plot(hedging[indexes],r_cost[indexes])
plt.plot(hedging[indexes],r_cost[indexes],".")
plt.vlines(beta,min(r_cost[indexes]),max(r_cost[indexes]))
plt.hlines(1.03,min(hedging[indexes]),max(hedging[indexes]),color="r")
plt.title("{:.3f}".format(sc_ratio))
plt.plot(hedging,costs[1])
mu_demand
percentile = 3.1
scale = 0.1
cost = []
rates = []
hedging = np.arange(30,200,100)
f,ax = plt.subplots(3,1,figsize=(16,8))
duration = 10000
plot_range = range(0,duration)
mu_demand = 30*scale
mu_drain = mu_demand*1.02
mu_transfer = mu_drain + (mu_drain-mu_demand)*1
slack_capacity = mu_transfer-mu_drain
std = np.sqrt(mu_drain+mu_transfer)
omega = std/slack_capacity
beta = (1/4)*(percentile**2)*(std/slack_capacity)
hedging=[beta/4,beta/2,beta]
#hedging=[beta]
init_state = (mu_drain-mu_demand)*duration*0.6
np.random.seed(5)
demand_seq = np.random.poisson(mu_demand,duration)
transfer_seq = np.random.poisson(mu_transfer,duration)
drain_seq = np.random.poisson(mu_drain,duration)
cumul =False
for h in reversed(hedging):
thres = 2*mu_drain+h*np.sqrt(mu_drain+mu_transfer)
#thres = h*10
arrival_buffer,drain_buffer,zeta = simulate_reflected_random_walk_repeat(
demand_seq[:duration],
transfer_seq[:duration],
drain_seq[:duration],
thres,
init_state=init_state,
flow=False)
#print(np.where(drain_buffer == 0))
cost.append(sum(arrival_buffer*c1)+sum(drain_buffer*c2))
rates.append(zeta*mu_transfer)
#plt.plot(drain_buffer[j*1000:(j+1)*1000]*c2+arrival_buffer[j*1000:(j+1)*1000]*c1)
if cumul:
ax[1].plot(np.cumsum(drain_buffer)[plot_range],label=int(h))
ax[0].plot(np.cumsum(arrival_buffer)[plot_range])
ax[2].plot(np.cumsum(arrival_buffer*c1+drain_buffer*c2)[plot_range])
else:
ax[1].plot((drain_buffer)[plot_range])
#ax[1].plot(np.ones(len(plot_range))*thres,".-")
ax[0].plot((arrival_buffer)[plot_range],label="{} * {}".format(int(h),int(std)))
ax[2].plot((arrival_buffer*c1+drain_buffer*c2)[plot_range])
#print(np.min(np.diff((arrival_buffer[1500:2000]*c1+drain_buffer[1500:2000]*c2))))
ax[0].set_ylabel("Items in buffer 1")
ax[1].set_ylabel("Items in buffer 2")
ax[2].set_ylabel("Total cost")
f.legend()
slack_capacity
std/slack_capacity
mu_drain*c2
thres*c2
np.sum(drain_buffer == 0)
mu_demand
rates
mu_demand
mu_transfer
time_horizon
offset/std
offset
percentile = 1.645
#percentile = 0
percentile = 1.96
#percentile = 2.33
percentile = 3.1
#percentile = 1
#percentile = 7
slack_capacity = mu_transfer-mu_drain
std = np.sqrt(mu_drain+mu_transfer)
time_horizon = (percentile*std)**2/(2*slack_capacity)**2
offset = time_horizon*(-slack_capacity) + percentile*std*np.sqrt(time_horizon)
time_horizon = int(np.ceil(time_horizon))
offset = int(np.ceil(offset))
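# Added note (an interpretation of the two formulas above, not part of the
# original cell): with slack capacity delta = mu_transfer - mu_drain and
# per-step std sigma = sqrt(mu_drain + mu_transfer), the shortfall envelope
#     f(t) = percentile*sigma*sqrt(t) - delta*t
# is maximised at t* = (percentile*sigma / (2*delta))**2, which is exactly
# `time_horizon`, and its maximum value f(t*) = (percentile*sigma)**2 / (4*delta)
# is exactly `offset` before rounding. So `offset` can be read as the
# percentile-level worst-case shortfall accumulated over the critical horizon t*.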
percentile*np.sqrt(3)
slack_capacity = mu_transfer-mu_drain
std = np.sqrt(mu_drain+mu_transfer)
beta = (1/4)*(percentile**2)*(std/slack_capacity) + slack_capacity/std
offset
std
slack_capacity
slack_capacity/std
slack_capacity
0.5*percentile*std/np.sqrt(time_horizon)
offset/std + slack_capacity/std
scaling_ratio = compute_scaling_ratio(mu_drain,mu_demand,std,init_state)
beta
min_cost = min(cost)
hedging = np.array(hedging)
r_cost = np.array([c/min_cost for c in cost[::-1]])
indexes = np.where(r_cost < 1.2)[0]
plt.plot(hedging[indexes],r_cost[indexes])
plt.plot(hedging[indexes],r_cost[indexes],".")
plt.vlines(beta,min(r_cost[indexes]),max(r_cost[indexes]))
plt.title("{:.3f}".format(scaling_ratio))
cost = []
hedging = np.arange(30,60,5)
init_state = 7e4
#hedging = np.arange(1,7)
j = 1
f,ax = plt.subplots(3,1,figsize=(16,8))
#plot_range = range(4000,5000)
duration = 100000
plot_range = range(0,10000)
plot_range = range(0,200)
cumul =False
for h in reversed(hedging):
thres = mu_drain+h*np.sqrt(mu_drain+mu_transfer)
#thres = h*10
arrival_buffer,drain_buffer,zeta = simulate_reflected_random_walk_repeat(demand_seq[:duration],
transfer_seq[:duration],
drain_seq[:duration],
thres,init_state=init_state,
flow=False)
cost.append(sum(arrival_buffer*c1)+sum(drain_buffer*c2))
#plt.plot(drain_buffer[j*1000:(j+1)*1000]*c2+arrival_buffer[j*1000:(j+1)*1000]*c1)
if cumul:
ax[1].plot(np.cumsum(drain_buffer*c2)[plot_range],label=h)
ax[0].plot(np.cumsum(arrival_buffer*c1)[plot_range])
ax[2].plot(np.cumsum(arrival_buffer*c1+drain_buffer*c2)[plot_range])
else:
ax[1].plot((drain_buffer*c2)[plot_range],label=h)
ax[0].plot((arrival_buffer*c1)[plot_range])
ax[2].plot((arrival_buffer*c1+drain_buffer*c2)[plot_range])
#print(np.min(np.diff((arrival_buffer[1500:2000]*c1+drain_buffer[1500:2000]*c2))))
f.legend()
min_cost = min(cost)
plt.plot(hedging,[c/min_cost for c in cost[::-1]])
plt.plot(hedging,[c/min_cost for c in cost[::-1]],".")
cost = []
hedging = np.arange(5,70,5)
init_state = 1e4
#hedging = np.arange(1,7)
j = 1
f,ax = plt.subplots(3,1,figsize=(16,8))
#plot_range = range(4000,5000)
duration = 6000
plot_range = range(0,6000)
#plot_range = range(0,300)
cumul =False
for h in reversed(hedging):
thres = mu_drain+h*np.sqrt(mu_drain)
#thres = h*10
arrival_buffer,drain_buffer,zeta = simulate_reflected_random_walk(demand_seq[:duration],transfer_seq[:duration],drain_seq[:duration],thres,init_state=init_state)
cost.append(sum(arrival_buffer*c1)+sum(drain_buffer*c2))
#plt.plot(drain_buffer[j*1000:(j+1)*1000]*c2+arrival_buffer[j*1000:(j+1)*1000]*c1)
if cumul:
ax[1].plot(np.cumsum(drain_buffer*c2)[plot_range],label=h)
ax[0].plot(np.cumsum(arrival_buffer*c1)[plot_range])
ax[2].plot(np.cumsum(arrival_buffer*c1+drain_buffer*c2)[plot_range])
else:
ax[1].plot((drain_buffer*c2)[plot_range],label=h)
ax[0].plot((arrival_buffer*c1)[plot_range])
ax[2].plot((arrival_buffer*c1+drain_buffer*c2)[plot_range])
#print(np.min(np.diff((arrival_buffer[1500:2000]*c1+drain_buffer[1500:2000]*c2))))
thres = 1e6
#thres = h*10
arrival_buffer,drain_buffer,_ = simulate_reflected_random_walk(demand_seq[:duration],transfer_seq[:duration],drain_seq[:duration],thres,init_state=init_state)
#plt.plot(drain_buffer[j*1000:(j+1)*1000]*c2+arrival_buffer[j*1000:(j+1)*1000]*c1)
if cumul:
#ax[1].plot(np.cumsum(drain_buffer*c2)[plot_range],label="e")
ax[0].plot(np.cumsum(arrival_buffer*c1)[plot_range],label="e")
#ax[2].plot(np.cumsum(arrival_buffer*c1+drain_buffer*c2)[plot_range])
else:
#ax[1].plot((drain_buffer*c2)[plot_range],label="e")
ax[0].plot((arrival_buffer*c1)[plot_range],label="e")
#ax[2].plot((arrival_buffer*c1+drain_buffer*c2)[plot_range])
f.legend()
(mu_transfer-mu_demand)/((zeta*mu_transfer)-mu_demand)
min_cost = min(cost)
plt.plot(hedging,[c/min_cost for c in cost[::-1]])
plt.plot(hedging,[c/min_cost for c in cost[::-1]],".")
h = []
for i in np.arange(0.94,0.949,0.001):
h.append(1/(1-i))
plt.plot(np.arange(0.94,0.949,0.001)/0.94,[i/min(h) for i in h])
min_cost = min(cost)
cost[0]-cost[1]
plt.plot(drain_buffer[:300])
plt.plot(arrival_buffer[:600])
plt.plot(buffer_seq[:1000])
sum(buffer_seq)
np.percentile((supply_seq-demand_seq)[(supply_seq-demand_seq) < 0],0.01)
plt.plot(np.cumsum(supply_seq)-np.cumsum(demand_seq))
percentile = 1.645
#percentile = 0
#percentile = 1.96
#percentile = 2.33
slack_capacity = mu_supply-mu_demand
time_horizon = (percentile**2)*mu_supply/(2*slack_capacity**2)
offset = time_horizon*(-slack_capacity) + percentile* np.sqrt(mu_supply*2*time_horizon)
print(time_horizon*2)
time_horizon = int(np.ceil(time_horizon))
offset = int(np.ceil(offset))
time_horizon = (percentile**2)*mu_supply*2/slack_capacity**2
time_horizon = int(np.ceil(time_horizon))
y = []
for d in range(time_horizon):
y.append(d*(slack_capacity) - percentile* np.sqrt(mu_supply*2*d))
y_1 = y
time_horizon_1 = time_horizon
y_2 = y
time_horizon_2 = time_horizon
time_horizon/time_horizon_1
1.96/1.645
plt.plot(range(time_horizon),y)
plt.plot(range(time_horizon_1),y_1)
plt.plot(range(time_horizon_2),y_2)
y
time_horizon
offset
thres = poisson.ppf(0.95,mu_demand)
#thres = 0
thres = poisson.ppf(0.5,mu_demand)
def idle_supply(demand_seq,supply_seq,offset):
inv_pos = offset
idle_supply_seq = np.zeros_like(supply_seq)
idle_count = 0
for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
if inv_pos > thres+offset:
s = 0
idle_count += 1
idle_supply_seq[i] = s
inv_pos += s-d
#print(idle_count/len(supply_seq))
return idle_supply_seq
def idle_supply_time_horizon(demand_seq,supply_seq,offset,time_horizon):
    # Track the inventory position while idling the supply for a day whenever
    # the position exceeds thres+offset, at most once every `time_horizon` days.
    # (`thres` comes from an earlier cell.)
    inv_pos = offset
    inv_pos_seq = np.zeros_like(supply_seq)
    days_count = 0
    for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
        if (inv_pos > thres+offset) and days_count >= time_horizon:
            s = 0
            days_count = 0
        inv_pos += s-d
        inv_pos_seq[i] = inv_pos
        days_count += 1
    return inv_pos_seq
def idle_supply_time_horizon_smooth(demand_seq,supply_seq,offset,time_horizon):
inv_pos = offset
inv_pos_seq = np.zeros_like(supply_seq)
days_count = 0
just_idled = False
for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
surplus = inv_pos - offset
if surplus > 0 and ((days_count >= time_horizon) or just_idled):
if d > surplus:
s = d-surplus
else:
s = 0
days_count=0
just_idled = True
else:
just_idled = False
inv_pos += s-d
inv_pos_seq[i] = inv_pos
if not just_idled:
days_count += 1
return inv_pos_seq
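# Added summary of idle_supply_time_horizon_smooth (descriptive comments only):
# the supplier withholds production on a day when the inventory position sits
# above `offset` (supplying only d - surplus if demand exceeds the surplus),
# but only if at least `time_horizon` working days have passed since the last
# idle day or the previous day was itself an idle day; idle days do not count
# towards the horizon, so consecutive idling is allowed while surplus remains.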
def work_supply_time_horizon_smooth(demand_seq,supply_seq,offset,time_horizon):
inv_pos = offset
inv_pos_seq = np.zeros_like(supply_seq)
days_count = 0
just_idled = True
for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
surplus = inv_pos - offset
if surplus > 0 and ((days_count >= time_horizon) or just_idled):
days_count = 0
if d > surplus:
s = d-surplus
else:
s = 0
days_count=0
just_idled = True
else:
days_count += 1
just_idled = False
inv_pos += s-d
inv_pos_seq[i] = inv_pos
return inv_pos_seq
def idle_supply_smooth(demand_seq,supply_seq,offset):
inv_pos = offset
idle_supply_seq = np.zeros_like(supply_seq)
idle_count = 0
inv_pos_array = np.zeros_like(supply_seq)
for i,(d,s) in enumerate(zip(demand_seq,supply_seq)):
surplus = inv_pos - offset
if surplus > 0:
if d > surplus:
s = d-surplus
else:
s = 0
idle_count += 1
idle_supply_seq[i] = s
inv_pos += s-d
inv_pos = min(inv_pos,offset)
inv_pos_array[i] = inv_pos
#print(idle_count/len(supply_seq))
print(inv_pos)
    # Return both sequences so callers can unpack the modified supply as well.
    return idle_supply_seq, inv_pos_array
slack_capacity/np.sqrt(2*mu_demand)
point = 1400
plt.plot(inv_pos_seq[point-100:point+500])
point = 1400
plt.plot(inv_pos_seq[point-100:point+100])
offset
time_horizon*slack_capacity/2
slack_capacity
inv_pos_seq = work_supply_time_horizon_smooth(demand_seq,supply_seq,53,12)
print(np.mean(inv_pos_seq < 0))
inv_pos_seq = idle_supply_time_horizon_smooth(demand_seq,supply_seq,53,12)
print(np.mean(inv_pos_seq < 0))
stocks = inv_pos_seq.copy()
stocks[inv_pos_seq < 0] = 0
np.mean(stocks)
inv_pos_seq = idle_supply_time_horizon_smooth(demand_seq,supply_seq,41,69)
print(np.mean(inv_pos_seq < 0))
stocks = inv_pos_seq.copy()
stocks[inv_pos_seq < 0] = 0
np.mean(stocks)
inv_pos_seq = idle_supply_time_horizon(demand_seq,supply_seq,offset,time_horizon)
print(np.mean(inv_pos_seq < 0))
#plt.plot(inv_pos_seq[827341-10:827341+10])
#plt.plot(inv_pos_seq[827341-10:827341+10],".")
stocks = inv_pos_seq.copy()
stocks[inv_pos_seq < 0] = 0
np.mean(stocks)
idle_supply_seq,inv_pos_seq = idle_supply_smooth(demand_seq,supply_seq, np.ceil(offset))
#inv_pos_seq = offset + np.cumsum(idle_supply_seq)-np.cumsum(demand_seq)
print(np.mean(inv_pos_seq < 0))
#plt.plot(inv_pos_seq[827341-10:827341+10])
#plt.plot(inv_pos_seq[827341-10:827341+10],".")
plt.plot(inv_pos_seq[:1200])
n_sims = 100000
demand_sum = np.random.poisson(mu_demand*np.ceil(time_horizon),n_sims)
supply_sum = np.random.poisson(mu_supply*np.ceil(time_horizon),n_sims)
print(np.mean((demand_sum-supply_sum) > np.ceil(offset)))
offset+time_horizon*slack_capacity
1001 % 100
offset
time_horizon*slack_capacity/2
np.random.seed(500)
n_sims = 100000
#n_sims = 20
stockouts = []
last_day_stockouts = []
last_day_stockouts_vals = []
ave_inventories = []
sim_time_horizon = time_horizon
for i in range(n_sims):
demand = np.random.poisson(mu_demand,sim_time_horizon)
supply = np.random.poisson(mu_supply,sim_time_horizon)
inv_pos_seq = offset + np.cumsum(supply)-np.cumsum(demand)
stockouts.append(np.sum(inv_pos_seq < 0))
last_day_stockouts.append(inv_pos_seq[-1] < offset)
if last_day_stockouts[-1]:
last_day_stockouts_vals.append(inv_pos_seq[-1]-offset)
ave_inventories.append(np.mean(inv_pos_seq))
if i % 10000 == 0:
plt.plot(inv_pos_seq)
sum(stockouts)/(sim_time_horizon*n_sims),np.sum(last_day_stockouts)/(n_sims),np.mean(ave_inventories)
offset
np.median(last_day_stockouts_vals)
for offset in range(200):
stock_out_probs = []
for d in range(1,time_horizon+1):
stock_out_prob = norm.cdf(-offset,slack_capacity*d,np.sqrt(2*mu_supply*d))
stock_out_probs.append(stock_out_prob)
overal_stockout_prob = np.mean(stock_out_probs)
#print(overal_stockout_prob)
if overal_stockout_prob < 0.05:
break
time_horizon
def get_percentile_deficit(cycle_dur,slack_capacity,variance,percentile = 0.5):
mu = slack_capacity*cycle_dur
std = np.sqrt(variance*cycle_dur)
cum_deficit_prob = norm.cdf(0,mu,std)
cum_percentile = 0
prev_cum_prob = cum_deficit_prob
for i in range(10000):
cum_prob = norm.cdf(-i,mu,std)
prob = (prev_cum_prob - cum_prob)/cum_deficit_prob
cum_percentile += prob
if cum_percentile >= percentile:
return i
prev_cum_prob = cum_prob
a = get_percentile_deficit(time_horizon/4,slack_capacity,2*mu_supply)
#get_percentile_deficit(slack_capacity,2*mu_supply,time_horizon)
print(a)
def compute_recovery_time(slack_capacity,variance,deficit,bound = 2.33):
dur = ((bound*np.sqrt(variance)+np.sqrt(bound**2*variance+4*slack_capacity*deficit))/(2*slack_capacity))**2
return int(np.ceil(dur))
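# Added derivation note for compute_recovery_time: it finds the smallest
# duration d such that the expected recovery net of a `bound`-sigma fluctuation
# covers the deficit,
#     slack_capacity*d - bound*sqrt(variance*d) >= deficit.
# Substituting x = sqrt(d) gives the quadratic
#     slack_capacity*x**2 - bound*sqrt(variance)*x - deficit = 0,
# whose positive root is
#     x = (bound*sqrt(variance) + sqrt(bound**2*variance + 4*slack_capacity*deficit)) / (2*slack_capacity),
# and d = x**2 is exactly the expression returned (rounded up).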
print(compute_recovery_time(slack_capacity,2*mu_supply,a))
def get_average_stockout_prob(duration,slack_capacity,variance,start):
stock_out_probs = []
for d in range(1,duration+1):
stock_out_prob = norm.cdf(0,start+slack_capacity*d,np.sqrt(variance*d))
stock_out_probs.append(stock_out_prob)
average_stockout_prob = np.mean(stock_out_probs)
return average_stockout_prob
def compute_stockout_prob_and_inventory_cost(cycle_dur,slack_capacity,variance,offset):
mu = slack_capacity*cycle_dur
std = np.sqrt(variance*cycle_dur)
cum_deficit_prob = norm.cdf(0,mu,std)
#print(cum_deficit_prob)
deficit = get_percentile_deficit(cycle_dur,slack_capacity,variance,0.95)
#print(deficit)
rec_dur = compute_recovery_time(slack_capacity,variance,deficit)
#print(rec_dur)
cycle_stockout_prob = get_average_stockout_prob(cycle_dur,slack_capacity,variance,offset)
rec_dur = int(np.ceil(deficit/slack_capacity))
print(rec_dur)
rec_stockout_prob = get_average_stockout_prob(rec_dur,slack_capacity,variance,offset-deficit)
#print(cycle_stockout_prob,rec_stockout_prob)
effective_duration = (cycle_dur+cum_deficit_prob*rec_dur)
#print(cycle_dur/effective_duration)
overall_stockout_prob = (cycle_dur*cycle_stockout_prob+cum_deficit_prob*rec_dur*rec_stockout_prob)/effective_duration
overall_inventory_cost = (cycle_dur*(0.5*slack_capacity*cycle_dur+offset)+cum_deficit_prob*rec_dur*(0.5*slack_capacity*rec_dur+offset-deficit))/effective_duration
#print(overall_inventory_cost)
return overall_stockout_prob,overall_inventory_cost
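# Added summary (descriptive comments only): the function blends two regimes,
# a regular cycle of length `cycle_dur` that starts at inventory `offset`, and
# a recovery stretch entered with probability `cum_deficit_prob` (the chance
# the cycle ends at or below its starting point), starting at `offset - deficit`
# where `deficit` is the 95th-percentile conditional shortfall. Stockout
# probability and average inventory (assuming a linear build-up at rate
# `slack_capacity`) are then time-weighted over the effective duration
# cycle_dur + cum_deficit_prob*rec_dur. Note that the `rec_dur` obtained from
# compute_recovery_time is overwritten by the simpler ceil(deficit/slack_capacity).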
time_horizon/4
variance = 2*mu_supply
min_inv_cost = np.inf
min_cycle_dur = None
min_offset = None
for cycle_dur in range(1,int(time_horizon)):
for offset in range(200):
overall_stockout_prob,inv_cost = compute_stockout_prob_and_inventory_cost(cycle_dur,slack_capacity,variance,offset)
#print(overall_stockout_prob)
if overall_stockout_prob < 0.05:
break
print(cycle_dur,inv_cost)
if inv_cost < min_inv_cost:
print(cycle_dur)
min_inv_cost = inv_cost
min_cycle_dur = cycle_dur
min_offset = offset
print(offset)
min_offset
min_cycle_dur
min_inv_cost
time_horizon
int(time_horizon)*(0.5*slack_capacity)
inv_cost
print(overal_stockout_prob)
overal_stockout_prob
probs = []
deficit = 10000
for i in range(deficit):
v = -offset-i
mu = slack_capacity*time_horizon
std = np.sqrt(2*mu_supply*time_horizon)
probs.append(norm.cdf(v,mu,std))
#print(i,probs[-1])
np.sum(-np.diff(probs)*np.arange(1,deficit)/norm.cdf(-offset,mu,std))
offsets = []
for dur in range(1,time_horizon+1):
for offset in range(200):
stock_out_probs = []
for d in range(1,dur+1):
stock_out_prob = norm.cdf(-offset,slack_capacity*d,np.sqrt(2*mu_supply*d))
stock_out_probs.append(stock_out_prob)
overal_stockout_prob = np.mean(stock_out_probs)
#print(overal_stockout_prob)
if overal_stockout_prob < 0.05:
break
#print(dur,offset)
offsets.append(offset)
plt.plot(offsets)
norm.cdf(-offset,mu,std)
offset
mu
(-np.diff(probs)/norm.cdf(-offset,mu,std))[:50]
-np.diff(probs)/norm.cdf(-offset,mu,std)
offset
np.sum(last_day_stockouts)/(n_sims)
sum(stockouts)/(int(np.ceil(time_horizon))*n_sims)
np.sum(last_day_stockouts)
np.sum(last_day_stockouts)/sum(stockouts)
np.mean(stockouts)
stockouts = np.array(stockouts)
np.median(stockouts[stockouts > 0])
plt.hist(stockouts[stockouts > 0])
plt.hist(stockouts,bins=range(0,50,2))
2*time_horizon
norm.cdf(-offset,slack_capacity*10,np.sqrt(mu_supply*10))
int(np.ceil(time_horizon))
```
# Ridge Regressor with StandardScaler
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of the features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data fetching
pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We use pandas to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selection
Feature selection is the process of reducing the number of input variables when developing a predictive model, both to lower the computational cost of modelling and, in some cases, to improve the performance of the model.
We assign all the required input features to X and the target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most of the machine learning models in the scikit-learn library do not handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values where they exist and encode string categories as indicator (dummy) columns.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The dataset is divided into two subsets: the first is used to fit/train the model and the second is held out for prediction, so that the model's performance can be estimated on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares:
\begin{equation*}
\min_{w} || X w - y||_2^2 + \alpha ||w||_2^2
\end{equation*}
The complexity parameter $\alpha \geq 0$ controls the amount of shrinkage: the larger the value of $\alpha$, the greater the amount of shrinkage and thus the more robust the coefficients become to collinearity.
This model solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n_samples, n_targets)).
#### Model Tuning Parameters
> **alpha** -> Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization.
> **solver** -> Solver to use in the computational routines {‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}
```
Input=[("standard",StandardScaler()),("model",Ridge(random_state=123))]
model=Pipeline(Input)
model.fit(x_train,y_train)
```
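As an optional aside (not part of the original notebook), the `alpha` and `solver` parameters listed above can be tuned with a small grid search. The sketch below is illustrative: it reuses the same pipeline layout and the `x_train`/`y_train` split from this notebook, and the parameter-grid values are assumptions, not recommendations.
```
from sklearn.model_selection import GridSearchCV

# Illustrative (hypothetical) grid over the Ridge hyper-parameters.
param_grid = {
    "model__alpha": [0.01, 0.1, 1.0, 10.0],
    "model__solver": ["auto", "svd", "cholesky", "lsqr"],
}
search = GridSearchCV(
    Pipeline([("standard", StandardScaler()), ("model", Ridge(random_state=123))]),
    param_grid,
    cv=5,
    scoring="r2",
)
search.fit(x_train, y_train)
print(search.best_params_, search.best_score_)
```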
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the proportion of the variance in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the average absolute difference between the real and the predicted values.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
We plot the first 20 actual test-set targets (in green) against their record number, and overlay the corresponding model predictions (in red) for comparison.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Thilakraj Devadiga , Github: [Profile](https://github.com/Thilakraj1998)
```
import numpy as np
import matplotlib.pyplot as plt
from latency import run_latency, run_latency_changing_topo, run_latency_per_round, run_latency_per_round_changing_topo, nodes_latency
import sys
sys.path.append('..')
from utils import create_mixing_matrix, load_data, run, consensus
```
# Base case
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs = run(train_loader, test_loader, comm_matrix, num_rounds, epochs, num_clients)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs)
plt.show()
```
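`create_mixing_matrix` and `nodes_latency` come from the local `utils` and `latency` modules, so their exact behaviour is not shown in this notebook. Purely as an illustrative sketch (an assumption, not the project's implementation), a 'grid' mixing matrix for 100 clients could be built as a 10x10 torus with Metropolis weights, which makes the matrix symmetric and doubly stochastic with exactly four neighbours per client:
```
import numpy as np

def grid_mixing_matrix_sketch(num_clients: int) -> np.ndarray:
    """Hypothetical 4-neighbour torus mixing matrix with Metropolis weights."""
    side = int(np.sqrt(num_clients))
    assert side * side == num_clients, "sketch assumes a square grid"
    W = np.zeros((num_clients, num_clients))
    for r in range(side):
        for c in range(side):
            i = r * side + c
            neighbours = [((r - 1) % side) * side + c,
                          ((r + 1) % side) * side + c,
                          r * side + (c - 1) % side,
                          r * side + (c + 1) % side]
            for j in neighbours:
                W[i, j] = 1.0 / 5.0      # degree 4 -> Metropolis weight 1/(deg+1)
            W[i, i] = 1.0 - W[i].sum()   # remaining mass stays on the node itself
    return W

W = grid_mixing_matrix_sketch(100)
print(np.allclose(W.sum(axis=0), 1.0), np.allclose(W.sum(axis=1), 1.0))  # doubly stochastic
```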
# Latency with fixed topology
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 2)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs2 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs2)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 4)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs4 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs4)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 8)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs8 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs8)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 16)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs16 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs16)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 32)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs32 = run_latency(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs32)
plt.show()
fig, ax = plt.subplots(1, figsize=(12, 9))
ax.set_ylim([0, 1])
x = np.array(range(10))
ax.plot(x, accs, color="red", label="base case")
ax.plot(x, accs2, color="lime", label="two delayed nodes")
ax.plot(x, accs4, color="green", label="four delayed nodes")
ax.plot(x, accs8, color="purple", label="eight delayed nodes")
ax.plot(x, accs16, color="blue", label="sixteen delayed nodes")
ax.plot(x, accs32, color="cyan", label="thirty-two delayed nodes")
plt.legend(loc="lower right", title="Number of delayed nodes")
plt.title("Accuracy curve depending on number of delayed nodes")
plt.xlabel("Round")
plt.ylabel("Accuracy")
plt.show()
```
# Latency with changing topology
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 2)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs2_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs2_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 4)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs4_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs4_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 8)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs8_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs8_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 16)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs16_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs16_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 32)
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs32_ = run_latency_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs32_)
plt.show()
fig, ax = plt.subplots(1, figsize=(12, 9))
ax.set_ylim([0, 1])
x = np.array(range(10))
ax.plot(x, accs, color="red", label="base case")
ax.plot(x, accs2_, color="lime", label="two delayed nodes")
ax.plot(x, accs4_, color="green", label="four delayed nodes")
ax.plot(x, accs8_, color="purple", label="eight delayed nodes")
ax.plot(x, accs16_, color="blue", label="sixteen delayed nodes")
ax.plot(x, accs32_, color="cyan", label="thirty-two delayed nodes")
plt.legend(loc="lower right", title="Number of delayed nodes")
plt.title("Accuracy curve depending on number of delayed nodes with changing topology")
plt.xlabel("Round")
plt.ylabel("Accuracy")
plt.show()
```
# Latency on a few rounds
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 2)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs1 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs1)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 4)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs2 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs2)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 8)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs3 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs3)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 16)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs4 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs4)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 32)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs5 = run_latency_per_round(train_loader, test_loader, comm_matrix,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs5)
plt.show()
fig, ax = plt.subplots(1, figsize=(12, 9))
ax.set_ylim([0, 1])
x = np.array(range(10))
ax.plot(x, accs, color="red", label="base case")
ax.plot(x, accs1, color="lime", label="two delayed nodes")
ax.plot(x, accs2, color="green", label="four delayed nodes")
ax.plot(x, accs3, color="purple", label="eight delayed nodes")
ax.plot(x, accs4, color="blue", label="sixteen delayed nodes")
ax.plot(x, accs5, color="cyan", label="thirty-two delayed nodes")
plt.legend(loc="lower right", title="Number of delayed nodes")
plt.title("Accuracy curve depending on number of delayed nodes with delays only on specific rounds")
plt.xlabel("Round")
plt.ylabel("Accuracy")
plt.show()
```
# Latency on a few rounds with changing topology
```
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 2)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs1_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs1_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 4)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs2_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs2_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 8)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs3_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs3_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 16)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs4_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs4_)
plt.show()
# IID case: all the clients have images of all the classes
# Grid graph topology: each client is connected to exactly 4 neighbours
# Hyperparameters
num_clients = 100
num_rounds = 10
epochs = 1
batch_size = 32
latency_nodes = nodes_latency(num_clients, 32)
latency_rounds = np.array([3, 7])
# Communication matrix
comm_matrix = create_mixing_matrix('grid', num_clients)
# Creating decentralized datasets
train_loader, test_loader = load_data(batch_size, num_clients)
# Instantiate models and optimizers and run decentralized training
global_model, client_models, accs5_ = run_latency_per_round_changing_topo(train_loader, test_loader,
num_rounds, epochs, num_clients, latency_nodes, latency_rounds)
cons = consensus(global_model, client_models)
print(cons)
axes = plt.gca()
axes.set_ylim([0,1])
plt.plot(range(num_rounds), accs5_)
plt.show()
fig, ax = plt.subplots(1, figsize=(12, 9))
ax.set_ylim([0, 1])
x = np.array(range(10))
ax.plot(x, accs, color="red", label="base case")
ax.plot(x, accs1_, color="lime", label="two delayed nodes")
ax.plot(x, accs2_, color="green", label="four delayed nodes")
ax.plot(x, accs3_, color="purple", label="eight delayed nodes")
ax.plot(x, accs4_, color="blue", label="sixteen delayed nodes")
ax.plot(x, accs5_, color="cyan", label="thirty-two delayed nodes")
plt.legend(loc="lower right", title="Number of delayed nodes")
plt.title("Accuracy curve depending on number of delayed nodes with changing topology and delays only on specific rounds")
plt.xlabel("Round")
plt.ylabel("Accuracy")
plt.show()
```
```
# import customizing_motif_vec
import extract_motif
import motif_class
import __init__
import json_utility
from importlib import reload
reload(__init__)
reload(extract_motif)
# reload(customizing_motif_vec)
reload(motif_class)
import plot_glycan_utilities
reload(plot_glycan_utilities)
import matplotlib.pyplot as plt
from glypy.io import glycoct
from glypy.structure.glycan import fragment_to_substructure, Glycan
import glycan_io
from glypy.structure.glycan_composition import GlycanComposition, FrozenGlycanComposition
%matplotlib inline
A4FG4S4 = """
RES
1b:x-dglc-HEX-1:5
2s:n-acetyl
3b:b-dglc-HEX-1:5
4s:n-acetyl
5b:a-dman-HEX-1:5
6b:a-dman-HEX-1:5
7b:b-dglc-HEX-1:5
8s:n-acetyl
9b:b-dgal-HEX-1:5
10b:a-dgro-dgal-NON-2:6|1:a|2:keto|3:d
11s:n-acetyl
12b:b-dglc-HEX-1:5
13s:n-acetyl
14b:b-dgal-HEX-1:5
15b:a-dgro-dgal-NON-2:6|1:a|2:keto|3:d
16s:n-acetyl
17b:a-dman-HEX-1:5
18b:b-dglc-HEX-1:5
19s:n-acetyl
20b:b-dgal-HEX-1:5
21b:a-dgro-dgal-NON-2:6|1:a|2:keto|3:d
22s:n-acetyl
23b:b-dglc-HEX-1:5
24s:n-acetyl
25b:b-dgal-HEX-1:5
26b:a-dgro-dgal-NON-2:6|1:a|2:keto|3:d
27s:n-acetyl
28b:a-lgal-HEX-1:5|6:d
LIN
1:1d(2+1)2n
2:1o(4+1)3d
3:3d(2+1)4n
4:3o(4+1)5d
5:5o(3+1)6d
6:6o(2+1)7d
7:7d(2+1)8n
8:7o(4+1)9d
9:9o(3+2)10d
10:10d(5+1)11n
11:6o(4+1)12d
12:12d(2+1)13n
13:12o(4+1)14d
14:14o(3+2)15d
15:15d(5+1)16n
16:5o(6+1)17d
17:17o(2+1)18d
18:18d(2+1)19n
19:18o(4+1)20d
20:20o(3+2)21d
21:21d(5+1)22n
22:17o(6+1)23d
23:23d(2+1)24n
24:23o(4+1)25d
25:25o(3+2)26d
26:26d(5+1)27n
27:1o(6+1)28d
"""
a_3350 = """RES
1b:x-dglc-HEX-1:5
2b:x-lgal-HEX-1:5|6:d
3b:x-dglc-HEX-1:5
4b:x-dman-HEX-1:5
5b:x-dman-HEX-1:5
6b:x-dglc-HEX-1:5
7b:x-dgal-HEX-1:5
8b:x-dgro-dgal-NON-2:6|1:a|2:keto|3:d
9s:n-acetyl
10s:n-acetyl
11b:x-dman-HEX-1:5
12b:x-dglc-HEX-1:5
13b:x-dgal-HEX-1:5
14s:n-acetyl
15b:x-dglc-HEX-1:5
16b:x-dgal-HEX-1:5
17s:n-acetyl
18s:n-acetyl
19s:n-acetyl
LIN
1:1o(-1+1)2d
2:1o(-1+1)3d
3:3o(-1+1)4d
4:4o(-1+1)5d
5:5o(-1+1)6d
6:6o(-1+1)7d
7:7o(-1+2)8d
8:8d(5+1)9n
9:6d(2+1)10n
10:4o(-1+1)11d
11:11o(-1+1)12d
12:12o(-1+1)13d
13:12d(2+1)14n
14:11o(-1+1)15d
15:15o(-1+1)16d
16:15d(2+1)17n
17:3d(2+1)18n
18:1d(2+1)19n
"""
undefined = """RES
1b:x-dglc-HEX-1:5
2s:n-acetyl
3b:b-dglc-HEX-1:5
4s:n-acetyl
5b:b-dman-HEX-1:5
6b:a-dman-HEX-1:5
7b:b-dglc-HEX-1:5
8s:n-acetyl
9b:a-dman-HEX-1:5
10b:b-dglc-HEX-1:5
11s:n-acetyl
12b:b-dglc-HEX-1:5
13s:n-acetyl
14b:a-lgal-HEX-1:5|6:d
LIN
1:1d(2+1)2n
2:1o(4+1)3d
3:3d(2+1)4n
4:3o(4+1)5d
5:5o(3+1)6d
6:6o(2+1)7d
7:7d(2+1)8n
8:5o(6+1)9d
9:9o(2+1)10d
10:10d(2+1)11n
11:9o(6+1)12d
12:12d(2+1)13n
13:1o(6+1)14d
UND
UND1:100.0:100.0
ParentIDs:1|3|5|6|7|9|10|12|14
SubtreeLinkageID1:o(4+1)d
RES
15b:b-dgal-HEX-1:5
16b:a-lgal-HEX-1:5|6:d
17b:a-dgal-HEX-1:5
18s:n-acetyl
LIN
14:15o(2+1)16d
15:15o(3+1)17d
16:17d(2+1)18n"""
und_glycan = glycoct.loads(undefined)
test1 = """RES
1b:x-dglc-HEX-1:5
2s:n-acetyl
3b:b-dglc-HEX-1:5
4s:n-acetyl
5b:a-dman-HEX-1:5
6b:a-dman-HEX-1:5
7b:b-dglc-HEX-1:5
8s:n-acetyl
9b:b-dglc-HEX-1:5
10s:n-acetyl
11b:a-dman-HEX-1:5
12b:b-dglc-HEX-1:5
13s:n-acetyl
14b:a-lgal-HEX-1:5|6:d
LIN
1:1d(2+1)2n
2:1o(4+1)3d
3:3d(2+1)4n
4:3o(4+1)5d
5:5o(3+1)6d
6:6o(2+1)7d
7:7d(2+1)8n
8:6o(4+1)9d
9:9d(2+1)10n
10:5o(6+1)11d
11:11o(2+1)12d
12:12d(2+1)13n
13:1o(6+1)14d
UND
UND1:100.0:100.0
ParentIDs:1|3|5|6|7|9|11|12|14
SubtreeLinkageID1:o(4+1)d
RES
15b:b-dgal-HEX-1:5
"""
glycan_test1 = glycoct.loads(test1)
reload(glycoct)
reload(glycan_io)
glycan_dict = glycan_io.load_glycan_obj_from_dir('/Users/apple/Desktop/NathanLab/CHO_Anders/GlycanSVG/')
A4FG4S4 = glycoct.loads(str(glycan_dict['A4FG4S4']))
glycan_dict['A4FG4S4']
temp_mono = A4FG4S4.root
## recursion,
temp_mono.children()
GlycanComposition.from_glycan(A4FG4S4)
from glypy.structure import monosaccharide
from glypy import monosaccharides
from glypy.structure import glycan
# (monosaccharides.GlcNAc)
GlycanComposition.from_glycan(glycan.Glycan(monosaccharides.GlcNAc))
# # get
def drop_terminal(a_glycan):
    # Recursively collect the terminal (leaf) monosaccharides of a glycan.
    # (Despite the name, this only collects the terminals; the actual dropping
    # is done further below with drop_monosaccharide.)
    def rec_drop_term(a_mono):
        temp_children = a_mono.children()
        return_list = []
        if temp_children:
            for pos, child in temp_children:
                return_list.extend(rec_drop_term(child))
            return return_list
        else:
            return [a_mono]  # a list of terminal monosaccharides
    return rec_drop_term(a_glycan.root)
# # A4FG4S4.root
# term_a4fg4s4=find_terminal(A4FG4S4)[4]
# term_a4fg4s4.parents()
A4FG4S4 = glycoct.loads(str(glycan_dict['A4FG4S4']))
for i in list(A4FG4S4.leaves()):
i.drop_monosaccharide(i.parents()[0][0])
_mono_list = list(A4FG4S4.leaves())
_mono_list
for i in _mono_list:
i.drop_monosaccharide(i.parents()[0][0])
plot_glycan_utilities.plot_glycan(A4FG4S4)
_mono_parents_list = [i.parents()[0][1] for i in _mono_list]
_mono_parents_list
#drop_monosaccharide(pos)
for _mpar in _mono_parents_list:
if len(_mpar.children())==1:
print(_mpar.children())
_mpar.drop_monosaccharide(_mpar.children()[0][0])
continue
for _index, _mchild in _mpar.children():
if _mchild in _mono_list:
_mpar.drop_monosaccharide(_index)
break
A4FG4S4
ud_composition = GlycanComposition.from_glycan(und_glycan)
ud_composition.serialize()
a = FrozenGlycanComposition.from_glycan(und_glycan)
```
# extract_motif
```
# transform glycoct to Glycan obj
a_glycan = glycoct.loads(a_3350)
# extract_motif
glycan_motif_dict = extract_motif.extract_motif(a_glycan)
print(glycan_motif_dict.keys())
print(glycan_motif_dict[1])
print(type(glycan_motif_dict[1][0]))
```
# Plot
```
plot_glycan_utilities.plot_glycan(a_glycan)
plot_glycan_utilities.plot_glycan_list([a_glycan],['demo'])
```
# pipeline
```
# in gc_init: clarify the glycoct_dict_goto_extraction_addr
# in gc_init: clarify the glytoucan_data_base_addr__
# two files above are input data file for this pip
extract_motif.get_motif_pip(22, prior=True)
# it would be faster if you run the python directly
# check the gc_init as well
# it would be faster if you run the python directly
customizing_motif_vec.customizing_motif_vec_pip()
# load motif vector and return edge_list
motif_dict = json_utility.load_json("/Users/apple/PycharmProjects/GlyCompare/intermediate_file/NBT_motif_dic_degree_list.json")
motif_lib = motif_class.GlycanMotifLib(motif_dict)
dep_tree, edge_list = motif_lib.motif_dependence_tree()
edge_list
len(motif_lib.motif_vec)
```
## plot glycan mass
```
a = json_utility.load_json('/Users/apple/PycharmProjects/nbt_glycan_profile/intermediate_file/NBT_glycan_dict.json')
name_k = {}
name_dict = {}
list_k = []
list_mass = []
# fi.patch.set_facecolor('white')
for i in sorted(a.keys()):
for k in a[i].keys():
name_k[k] = a[i][k]
name_dict[k] = i
list_k.append(glycoct.loads(a[i][k]))
list_mass.append(i)
len(list(name_k))
plot_glycan_utilities.plot_glycan_list(list_k, list_mass)
```
|
github_jupyter
|
```
import matplotlib.pyplot as plt
from matplotlib import style
import numpy as np
%matplotlib inline
style.use('ggplot')
x = [20,30,50]
y = [ 10,50,13]
x2 = [4,10,47,]
y2= [56,4,30]
plt.plot(x, y, 'r', label='line one', linewidth=5)
plt.plot(x2, y2, 'c', label ='line two', linewidth=5)
plt.title('Interactive plot')
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.legend()
#plt.grid(True, color='k')
plt.show()
#BAR GRAPH
plt.bar([1,4,5,3,2],[4,7,8,10,11], label='Type 1')
plt.bar([9,7,6,8,10],[3,6,9,11,15], label = 'Type 2', color='k')
plt.legend()
plt.xlabel('Bar Number')
plt.ylabel('Bar Height')
plt.title('Bar Graph')
plt.show()
```
HISTOGRAM
```
# Bar plots have categorical variables while histograms have quantitative variables
population_ages = [22,34,45,78,23,65,47,98,70,56,54,87,23,54,31,35,
64,76,87,80,60,73,47,63,79,52,75,64,51,46,83,62,36,74,63]
from numpy.random import seed
from numpy.random import randint
seed(1)
#generate some random integers
population_ages_2 = randint(10,50,40)
#print(population_ages_2)
bins = [20,30,40,50,60,70,80,90,100]
plt.hist(population_ages, bins, histtype='bar', color='m', rwidth=0.5, label='Group 1')
plt.hist(population_ages_2, bins, histtype='bar', color='c', rwidth=0.5, label='Group 2')
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.title('Histogram')
plt.legend()
plt.show()
```
AREA PLOT AND STACK PLOT
```
days = randint(1,5,5)
seed(0)
sleeping = randint(10,30,5)
eating = randint(40,60,5)
working = randint(70,100,5)
playing = randint(100,150,5)
plt.plot([],[], color = 'm', label = 'sleeping', linewidth = 5)
plt.plot([],[], color = 'c', label = 'eating', linewidth = 5)
plt.plot([],[], color = 'r', label = 'working', linewidth = 5)
plt.plot([],[], color = 'k', label = 'playing', linewidth = 5)
plt.stackplot(days, sleeping, eating, working, playing, colors = ['m','c','r','k'])
plt.legend()
```
PIE CHART
```
seed(0)
slices = randint(20,100,5)
activities = ['balling','playing','sleeping','praying','eating']
cols = ['c','m','r','b','y']
plt.pie(slices,
labels = activities,
startangle = 90,
shadow = True,
colors = cols,
autopct = '%.1f%%', #formats the percentage of the data given
explode=(0,0.2,0,0,0.1)) #this is to explode the chart and takes positional argument
plt.title('Pie Chart')
plt.show()
#working with Multiple Plots
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0,5.0,0.1)
t2 = np.arange(0.0,6.0,0.4)
plt.subplot(211)
plt.plot(t1, f(t1),'bo',
t2, f(t2))
plt.subplot(212)
plt.plot(t1, np.cos(2*np.pi*t1), color = 'k')
plt.show()
```
FURTHER PLOTTING IN MATPLOTLIB/PYLAB
```
from matplotlib import pylab
pylab.__version__
import numpy as np
x = np.linspace(0,10,25)
y = x*x+2
print()
print(x)
print()
print(y)
#print(np.array([x,y]).reshape(25,2)) # to join the array together
pylab.plot(x,y, 'r') #'r' stands for red
#drawing a subgraph
pylab.subplot(1,2,1) #rows, columns and indexes
pylab.plot(x,y, 'b--')
pylab.subplot(1,2,2)
pylab.plot(y,x, 'g*-')
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
fig = plt.figure()
ax = fig.add_axes([0.5,0.1,0.8,0.8]) #this controls the left,bottom,width and height of the canvas
ax.plot(x,y, 'r')
#we can also draw subgraphs
fig, ax = plt.subplots(nrows=1, ncols=2)
for axis in ax:
    axis.plot(x, y, 'r')
#we can draw a picture or a graph inside of another graph
fig = plt.figure()
ax1 = fig.add_axes([0.5,0.1,0.8,0.8]) #Big axes
ax2 = fig.add_axes([0.6,0.5,0.35,0.3]) #small canvas
ax1.plot(x,y,'r')
ax2.plot(y,x, 'g')
fig, ax = plt.subplots(dpi=100)
ax.set_xlabel('X-axis')
ax.set_ylabel('Y-axis')
ax.set_title('tutorial plots')
#ax.plot(x,y, 'r')
ax.plot(x,x**2)
ax.plot(x, x**3)
#ax.legend(['label 1', 'label 2'])
ax.legend(['y = x**2', 'y = x**3'], loc=2) #plotting the legend
#you can also set other properties such as line color, transparency and more
fig, ax = plt.subplots(dpi=100)
ax.plot(x, x**2, 'r', alpha=0.5) #alpha sets the line colour transparency
ax.plot(x, x+2, alpha=.5)
ax.plot(x, x+3, alpha=.5)
fig, ax = plt.subplots(dpi=100)
#line width
ax.plot(x, x+1, 'b', lw=0.5 )
ax.plot(x, x+2, 'b', lw=1.5)
ax.plot(x, x+3, 'b', lw=3)
ax.plot(x, x+4, 'b', lw=3.5)
fig, ax = plt.subplots(dpi=100)
ax.plot(x, x+1, 'b', lw=0.5, linestyle='-')
ax.plot(x, x+2, 'b', lw=1.5, linestyle='-.')
ax.plot(x, x+3, 'b', lw=3, linestyle=':')
ax.plot(x, x+4, 'b', lw=3.5, linestyle='-')
fig, ax = plt.subplots(dpi=100)
ax.plot(x, x+1, 'b', lw=0.5 , marker='o', markersize=5, markerfacecolor='r')
ax.plot(x, x+2, 'b', lw=1.5, marker='+')
ax.plot(x, x+3, 'b', lw=3, marker='s')
ax.plot(x, x+4, 'b', lw=3.5, marker='1', markersize=10)
```
LIMITING OUR DATA
```
fig, ax = plt.subplots(1,2, figsize=(10,5))
ax[0].plot(x,x**2, x,x**3, lw=3)
#ax[0].grid(True) this applies if we are not using ggplot
ax[1].plot(x,x**2, x,x**3, lw=3)
#we set the x and y limit on the second plot
ax[1].set_ylim([0,60])
ax[1].set_xlim([2,5])
```
OTHER 2-D GRAPHS
```
n = np.array([0,1,2,3,4,5])
fig, ax = plt.subplots(1,4, figsize=(16,5))
ax[0].set_title('scatter')
ax[0].scatter(x, x + 0.25*np.random.randn(len(x)))
ax[1].set_title('step plot')
ax[1].step(n, n**2, lw=2, color='b')
ax[2].set_title('Bar')
ax[2].bar(n, n**2, align='center', color ='g', alpha=0.5)
ax[3].set_title('fill between')
ax[3].fill_between(x, x**2, x**3, color ='g', alpha=0.5)
plt.show()
# Draw a histogram (very important)
x = np.random.randn(10000)
fig, ax = plt.subplots(1,2, figsize=(12,4))
ax[0].set_title('Histogram')
ax[0].hist(x, color='g', alpha=0.8)
ax[1].set_title('Cumulative detailed histogram')
ax[1].hist(x, cumulative=True, bins=9)
plt.show()
#draw a contour map
#lets create some data where X and Y are coordinates and Z is the depth or height
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.cm as cm
delta = 0.0075
x = np.arange(-3, 3, delta)
y = np.arange(-2, 2, delta)
X, Y = np.meshgrid(x,y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(-X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2)*2
fig, ax = plt.subplots(dpi=100)
CS = ax.contour(X,Y,Z) #CS is contour surface
ax.clabel(CS, inline=1, fontsize=10)
ax.set_title('Contour Map')
```
3 D MAPS
```
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = plt.figure(figsize=(14,6), dpi=100)
#Specify the 3D graphics to draw with projection='3d'
ax = fig.add_subplot(1,2,1, projection='3d')
ax.plot_surface(X, Y, Z, rstride=10, cstride=10, lw=0, color='c')
#write a program to create a pie chart of the popularity of programming languages
popularity = [200,334,890,290,679,300,980] #No of users of programming languages
prog_lang = ['Java', 'C#', 'C++', 'CSS', 'Java Script', 'Python', 'R']
fig = plt.figure(figsize=(14,6), dpi=100)
plt.pie(popularity,
shadow = True,
autopct= '%.f%%', startangle = 180,
explode=[0,0,0,0,0,0,0.1],
labels = prog_lang)
plt.title('Popularity of Programming languages')
plt.show()
```
|
github_jupyter
|
# Svenskt Kvinnobiografiskt lexikon part 5
version part 5 - 0.1
Check whether Alvin has an authority record for the SKBL women
* this [Jupyter Notebook](https://github.com/salgo60/open-data-examples/blob/master/Svenskt%20Kvinnobiografiskt%20lexikon%20part%205.ipynb)
* [part 1](https://github.com/salgo60/open-data-examples/blob/master/Svenskt%20Kvinnobiografiskt%20lexikon.ipynb) check Wikidata and SKBL
* [part 2](https://github.com/salgo60/open-data-examples/blob/master/Svenskt%20Kvinnobiografiskt%20lexikon%20part%202.ipynb) more queries etc.
* [part 4](https://github.com/salgo60/open-data-examples/blob/master/Svenskt%20Kvinnobiografiskt%20lexikon%20part%204.ipynb) get archives
# Wikidata
get SKBL women not connected to Alvin
```
from datetime import datetime
now = datetime.now()
print("Last run: ", datetime.now())
# pip install sparqlwrapper
# https://rdflib.github.io/sparqlwrapper/
import sys,json
import pandas as pd
from SPARQLWrapper import SPARQLWrapper, JSON
endpoint_url = "https://query.wikidata.org/sparql"
querySKBLAlvin = """SELECT ?item (REPLACE(STR(?item), ".*Q", "Q") AS ?wid) ?SKBL (URI(CONCAT("https://www.alvin-portal.org/alvin/resultList.jsf?query=", ENCODE_FOR_URI(?itemLabel), "&searchType=PERSON")) AS ?Alvin) WHERE {
?item wdt:P4963 ?id.
OPTIONAL { ?item wdt:P569 ?birth. }
MINUS { ?item wdt:P6821 ?value. }
BIND(URI(CONCAT("https://www.skbl.se/sv/artikel/", ?id)) AS ?SKBL)
SERVICE wikibase:label {
bd:serviceParam wikibase:language "sv".
?item rdfs:label ?itemLabel.
}
}
ORDER BY (?itemLabel)"""
def get_sparql_dataframe(endpoint_url, query):
"""
Helper function to convert SPARQL results into a Pandas data frame.
"""
user_agent = "salgo60/%s.%s" % (sys.version_info[0], sys.version_info[1])
sparql = SPARQLWrapper(endpoint_url, agent=user_agent)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
result = sparql.query()
processed_results = json.load(result.response)
cols = processed_results['head']['vars']
out = []
for row in processed_results['results']['bindings']:
item = []
for c in cols:
item.append(row.get(c, {}).get('value'))
out.append(item)
return pd.DataFrame(out, columns=cols)
SKBLmissingAlvin = get_sparql_dataframe(endpoint_url, querySKBLAlvin )
SKBLmissingAlvin.info()
import csv
import urllib3, json
http = urllib3.PoolManager()
listNewItems =[]
for index,row in SKBLmissingAlvin.iterrows():
url = row["Alvin"]
print(url)
r = http.request('GET', url)
print(len(r.data),url)
#listNewItems.append(new_item)
#print (len(listNewItems) ," antal poster")
```
|
github_jupyter
|
[[source]](../api/alibi.explainers.shap_wrappers.rst)
# Tree SHAP
<div class="alert alert-info">
Note
To enable SHAP support, you may need to run:
```bash
pip install alibi[shap]
```
</div>
## Overview
The tree SHAP (**SH**apley **A**dditive ex**P**lanations) algorithm is based on the paper [From local explanations to global understanding with explainable AI for trees](https://www.nature.com/articles/s42256-019-0138-9) by Lundberg et al. and builds on the open source [shap library](https://github.com/slundberg/shap) from the paper's first author.
The algorithm provides human interpretable explanations suitable for regression and classification of models with tree structure applied to tabular data. This method is a member of the *additive feature attribution methods* class; feature attribution refers to the fact that the change of an outcome to be explained (e.g., a class probability in a classification problem) with respect to a *baseline* (e.g., average prediction probability for that class in the training set) can be attributed in different proportions to the model input features.
A simple illustration of the explanation process is shown in Figure 1. Here we see depicted a tree-based model which takes as input features such as `Age`, `BMI` or `Blood pressure` and outputs `Mortality risk score`, a continuous value. Let's assume that we aim to explain the difference between an observed outcome and no risk, corresponding to a base value of `0.0`. Using the Tree SHAP algorithm, we attribute the `4.0` difference to the input features. Because the sum of the attribution values equals `output - base value`, this method is _additive_. We can see for example that the `Sex` feature contributes negatively to this prediction whereas the remainder of the features have a positive contribution (i.e., increase the mortality risk). For explaining this particular data point, the `Blood Pressure` feature seems to have the largest effect, and corresponds to an increase in the mortality risk. See our example on how to perform explanations with this algorithm and visualise the results using the `shap` library visualisations [here](../examples/interventional_tree_shap_adult_xgb.ipynb) and [here](../examples/path_dependent_tree_shap_adult_xgb.ipynb).

Figure 1: Cartoon illustration of explanation models with Tree SHAP.
Image Credit: Scott Lundberg (see source [here](https://www.nature.com/articles/s42256-019-0138-9))
## Usage
In order to compute the shap values, the following arguments can optionally be set when calling the `explain` method:
- `interactions`: set to `True` to decompose the shap value of every feature for every example into a main effect and interaction effects
- `approximate`: set to `True` to calculate an approximation to shap values (see our [example](../examples/path_dependent_tree_shap_adult_xgb.ipynb))
- `check_additivity`: if the explainer is initialised with `model_output = raw` and this option is `True` the explainer checks that the sum of the shap values is equal to model output - expected value
- `tree_limit`: if an `int` is passed, an ensemble formed of only `tree_limit` trees is explained
If the dataset contains categorical variables that have been encoded before being passed to the explainer and a single shap value is desired for each categorical variable, the following options should be specified (see the sketch after this list):
- `summarise_result`: set to `True`
- `cat_var_start_idx`: a sequence of integers containing the column indices where categorical variables start. If the feature matrix contains a categorical feature starting at index 0 and one at index 10, then `cat_var_start_idx=[0, 10]`
- `cat_vars_enc_dim`: a list containing the dimension of the encoded categorical variables. The number of columns specified in this list is summed for each categorical variable starting with the corresponding index in `cat_var_start_idx`. So if `cat_var_start_idx=[0, 10]` and `cat_vars_enc_dim=[3, 5]`, then the columns with indices `0, 1` and `2` and `10, 11, 12, 13` and `14` will be combined to return one shap value for each categorical variable, as opposed to `3` and `5`.
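For illustration only, a sketch of such a call, assuming one-hot encoded categorical variables starting at columns 0 and 10 with 3 and 5 levels respectively; `explainer` and `X` are placeholders and the argument names are the ones listed above:
```python
# Hypothetical call combining the options described above; `explainer` and `X`
# are assumed to exist already (see the snippets in the following sections).
explanation = explainer.explain(
    X,
    summarise_result=True,       # one shap value per categorical variable
    cat_var_start_idx=[0, 10],   # columns where each encoded variable starts
    cat_vars_enc_dim=[3, 5],     # number of encoded columns per variable
)
```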
### Path-dependent feature perturbation algorithm
#### Initialisation and fit
The explainer is initialised with the following arguments:
- a model, which could be an `sklearn`, `xgboost`, `catboost` or `lightgbm` model. Note that some of the models in these packages or models trained with specific objectives may not be supported. In particular, passing raw strings as categorical levels for `catboost` and `lightgbm` is not supported
- `model_output` should always default to `raw` for this algorithm
- optionally, set `task` to `'classification'` or `'regression'` to indicate the type of prediction the model makes. If set to `regression` the `prediction` field of the response is empty
- optionally, a list of feature names via `feature_names`. This is used to provide information about feature importances in the response
- optionally, a dictionary, `category_names`, that maps the columns of the categorical variables to a list of strings representing the names of the categories. This may be used for visualisation in the future.
```python
from alibi.explainers import TreeShap
explainer = TreeShap(
model,
feature_names=['size', 'age'],
categorical_names={0: ['S', 'M', 'L', 'XL', 'XXL']}
)
```
For this algorithm, fit is called with no arguments:
```python
explainer.fit()
```
#### Explanation
To explain an instance `X`, we simply pass it to the explain method:
```python
explanation = explainer.explain(X)
```
The returned explanation object has the following fields:
* `explanation.meta`:
```python
{'name': 'TreeShap',
'type': ['whitebox'],
'task': 'classification',
'explanations': ['local', 'global'],
'params': {'summarise_background': False, 'algorithm': 'tree_path_dependent' ,'kwargs': {}}
}
```
This field contains metadata such as the explainer name and type as well as the type of explanations this method can generate. In this case, the `params` attribute shows the Tree SHAP variant that will be used to explain the model in the `algorithm` attribute.
* `explanation.data`:
```python
data={'shap_values': [
array([[ 5.0661433e-01, 2.7620478e-02],
[-4.1725192e+00, 4.4859368e-03],
[ 4.1338313e-01, -5.5618007e-02]],
dtype=float32)
],
'shap_interaction_values': [array([], dtype=float64)],
'expected_value': array([-0.06472124]),
'model_output': 'raw',
'categorical_names': {0: ['S', 'M', 'L', 'XL', 'XXL']},
'feature_names': ['size', 'age'],
'raw': {
'raw_prediction': array([-0.73818872, -8.8434663 , -3.24204564]),
'loss': [],
'prediction': array([0, 0, 0]),
'instances': array([[0, 23],
[4, 55],
[2, 43]]),
'labels': array([], dtype=float64),
'importances': {
'0': {
'ranked_effect': array([1.6975055 , 1.3598266], dtype=float32),
'names': [
'size',
'age',
]
},
'aggregated': {
'ranked_effect': array([1.6975055 , 1.3598266], dtype=float32),
'names': [
'size',
'age',
]
}
}
}
}
```
This field contains:
* `shap_values`: a list of length equal to the number of model outputs, where each entry is an array of dimension samples x features of shap values. For the example above, 3 instances with 2 features have been explained, so the shap values for each class are of dimension 3 x 2
* `shap_interaction_values`: an empty list since `interactions` was set to `False` in the explain call
* `expected_value`: an array containing expected value for each model output
* `model_output`: `raw` indicates that the model raw output was explained, the only option for the path dependent algorithm
* `feature_names`: a list with the feature names
* `categorical_names`: a mapping of the categorical variables (represented by indices in the shap_values columns) to the description of the category
* `raw`: this field contains:
* `raw_prediction`: a samples x n_outputs array of predictions for each instance to be explained.
* `prediction`: an array containing the index of the maximum value in the `raw_prediction` array
* `instances`: a samples x n_features array of instances which have been explained
* `labels`: an array containing the labels for the instances to be explained
* `importances`: a dictionary where each entry is a dictionary containing the sorted average magnitude of the shap value (ranked_effect) along with a list of feature names corresponding to the re-ordered shap values (names). There are n_outputs + 1 keys, corresponding to n_outputs and the aggregated output (obtained by summing all the arrays in shap_values)
Please see our examples on how to visualise these outputs using the shap library visualisations [here](../examples/interventional_tree_shap_adult_xgb.ipynb) and [here](../examples/path_dependent_tree_shap_adult_xgb.ipynb).
#### Shapley interaction values
##### Initialisation and fit
Shapley interaction values can only be calculated using the path-dependent feature perturbation algorithm in this release, so no arguments are passed to the `fit` method:
```python
explainer = TreeShap(
model,
model_output='raw',
)
explainer.fit()
```
##### Explanation
To obtain the Shapley interaction values, the `explain` method is called with the option `interactions=True`:
```python
explanation = explainer.explain(X, interactions=True)
```
The explanation contains a list with the shap interaction values for each model output in the `shap_interaction_values` field of the `data` property.
### Interventional feature perturbation algorithm
#### Explaining model output
##### Initialisation and fit
```python
explainer = TreeShap(
model,
model_output='raw',
)
explainer.fit(X_reference)
```
Model output can be set to `model_output='probability'` to explain models which return probabilities. Note that this requires the model to be trained with specific objectives. Please see the footnote to our path-dependent feature perturbation [example](../examples/path_dependent_tree_shap_adult_xgb.ipynb) for an example of how to set the model training objective in order to explain probability outputs.
##### Explanation
To explain instances in `X`, the explainer is called as follows:
```python
explanation = explainer.explain(X)
```
#### Explaining loss functions
##### Initialisation and fit
To explain a loss function, the following configuration and fit steps are necessary:
```python
explainer = TreeShap(
model,
model_output='log_loss',
)
explainer.fit(X_reference)
```
Only square loss regression objectives and cross-entropy classification objectives are supported in this release.
##### Explanation
Note that the labels need to be passed to the `explain` method in order to obtain the explanation:
```python
explanation = explainer.explain(X, y)
```
### Miscellaneous
#### Runtime considerations
##### Adjusting the size of the reference dataset
The algorithm automatically warns the user if a background dataset size of more than `1000` samples is passed. If the runtime of an explanation with the original dataset is too large, then the algorithm can automatically subsample the background dataset during the `fit` step. This can be achieved by specifying the fit step as
```python
explainer.fit(
X_reference,
summarise_background=True,
n_background_samples=300,
)
```
or
```python
explainer.fit(
X_reference,
summarise_background='auto'
)
```
The `auto` option will select `1000` examples, whereas using the boolean argument allows the user to directly control the size of the reference set. If categorical variables are specified, the algorithm uses subsampling of the data. Otherwise, a kmeans clustering algorithm is used to select the background dataset.
As described above, the explanations are performed with respect to the expected output over this dataset, so the shap values will be affected by the dataset selection. We recommend experimenting with various ways to choose the background dataset before deploying explanations.
## Theoretical overview
Recall that, for a model $f$, the Kernel SHAP algorithm [[1]](#References) explains a certain outcome with respect to a chosen reference (or an expected value) by estimating the shap values of each feature $i$ from $\{1, ..., M\}$, as follows:
- enumerate all subsets $S$ of the set $F \setminus \{i\}$
- for each $S \subseteq F \setminus \{i\}$, compute the contribution of feature $i$ as $C(i|S) = f(S \cup \{i\}) - f(S)$
- compute the shap value according to
\begin{equation}\tag{1}
\phi_i := \frac{1}{M} \sum \limits_{{S \subseteq F \setminus \{i\}}} \frac{1}{\binom{M - 1}{|S|}} C(i|S).
\end{equation}
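As a concrete, exponential-cost illustration of equation $(1)$, the sketch below enumerates all subsets for a caller-supplied value function `v(S)`; it is not part of alibi and only makes the weighting explicit:
```python
from itertools import combinations
from math import comb

def exact_shap(v, M):
    """Exact Shapley values for a value function v(S) over features {0, ..., M-1},
    following equation (1): phi_i = 1/M * sum_S C(i|S) / binom(M-1, |S|)."""
    phi = [0.0] * M
    for i in range(M):
        rest = [j for j in range(M) if j != i]
        for size in range(M):
            for S in combinations(rest, size):
                contrib = v(set(S) | {i}) - v(set(S))        # C(i|S)
                phi[i] += contrib / (M * comb(M - 1, size))  # weight from equation (1)
    return phi

# Tiny usage example with an additive value function (shap value of feature i is w_i).
w = [1.0, 2.0, 3.0]
print(exact_shap(lambda S: sum(w[j] for j in S), M=3))  # ~[1.0, 2.0, 3.0]
```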
Since most models do not accept arbitrary patterns of missing values at inference time, $f(S)$ needs to be approximated. The original formulation of the Kernel Shap algorithm [[1]](#References) proposes to compute $f(S)$ as the _observational conditional expectation_
\begin{equation}\tag{2}
f(S) := \mathbb{E}\left[f(\mathbf{x}_{S}, \mathbf{X}_{\bar{S}} | \mathbf{X}_S = \mathbf{x}_S) \right]
\end{equation}
where the expectation is taken over a *background dataset*, $\mathcal{D}$, after conditioning. Computing this expectation involves drawing sufficiently many samples from $\mathbf{X}_{\bar{S}}$ for every sample from $\mathbf{X}_S$, which is expensive. Instead, $(2)$ is approximated by
$$
f(S) := \mathbb{E} \left[f(\mathbf{x}_{S}, \mathbf{X}_{\bar{S}})\right]
$$
where features in a subset $S$ are fixed and features in $\bar{S}$ are sampled from the background dataset. This quantity is referred to as _marginal_ or *interventional conditional expectation*, to emphasise that setting features in $S$ to the values $\mathbf{x}_{S}$ can be viewed as an intervention on the instance to be explained.
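For illustration, a minimal Monte Carlo sketch of this interventional estimate for a generic black-box `predict_fn`; the model, instance and background set below are made-up placeholders:
```python
import numpy as np

def interventional_f_S(predict_fn, x, S, background):
    """Monte Carlo estimate of E[ f(x_S, X_{S_bar}) ]: features in S are fixed to
    their values in x, features outside S come from each background row in turn."""
    idx = list(S)
    hybrids = background.copy()          # one hybrid sample per background row
    hybrids[:, idx] = x[idx]             # intervene: overwrite the S columns with x
    return predict_fn(hybrids).mean()

# Hypothetical usage with a 3-feature linear model and a small random background set.
rng = np.random.default_rng(0)
background = rng.normal(size=(100, 3))
x = np.array([1.0, 2.0, 3.0])
predict = lambda X: X @ np.array([0.5, -1.0, 2.0])
print(interventional_f_S(predict, x, S={0, 2}, background=background))
```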
As described in [[2]](#References), when estimating the impact of a feature $i$ on the function value by $\mathbb{E} \left[ f | X_i = x_i \right]$, one should bear in mind that observing $X_i = x_i$ changes the distribution of the features $X_{j \neq i}$ if these variables are correlated. Hence, if the conditional expectation is used to estimate $f(S)$, the Shapley values might not be accurate since they also depend on the remaining variables, an effect which becomes important if there are strong correlations amongst the independent variables. Furthermore, the authors show that estimating $f(S)$ using the conditional expectation violates the *sensitivity principle*, according to which the Shapley value of a redundant variable should be 0. On the other hand, the intervention breaks the dependencies, ensuring that sensitivity holds. One potential drawback of this method is that setting a subset of features to certain values without regard to the values of the features in the complement (i.e., $\bar{S}$) can generate instances that are outside the training data distribution, which will affect the model prediction and hence the contributions.
The following sections detail how these methods work and how, unlike Kernel SHAP, they compute the exact shap values in polynomial time. The algorithm estimating contributions using interventional expectations is presented first; the remaining sections are dedicated to an approximate algorithm that evaluates the expectation without requiring a background dataset (the path-dependent perturbation) and to Shapley interaction values.
<a id='source_1'></a>
### Interventional feature perturbation
<a id='interventional'></a>
The interventional feature perturbation algorithm provides an efficient way to calculate the expectation $f(S) := \mathbb{E} \left[f(\mathbf{x}_{S}, \mathbf{X}_{\bar{S}})\right]$ for all possible subsets $S$, and to combine these values according to equation $(1)$ in order to obtain the Shapley value. Intuitively, one can proceed as follows:
- choose a background sample $r \in \mathcal{D}$
- for each feature $i$, enumerate all subsets $S \subseteq F \setminus \{i\}$
- for each such subset, $S$, compute $f(S)$ by traversing the tree with a _hybrid sample_ where the features in $\bar{S}$ are replaced by their corresponding values in $r$
- combine results according to equation $(1)$
If $R$ samples from the background distribution are used, then the complexity of this algorithm is $O(RM2^M)$ since we perform $2^M$ enumerations for each of the $M$ features, $R$ times. The key insight into this algorithm is that multiple hybrid samples will end up traversing identical paths and that this can be avoided if the shap values' calculation is reformulated as a summation over the paths in the tree (see [[4]](#References) for a proof):
$$
\phi_i = \sum_{P}\phi_{i}^P
$$
where the summation is over paths $P$ in the tree descending from $i$. The value and sign of the contribution of each path descending through a node depends on whether the split from the node is due to a foreground or a background feature, as explained in the practical example below.
<a id='source_4'></a>
#### Computing contributions with interventional Tree SHAP: a practical example.

Figure 2: Illustration of the feature contribution and expected value estimation process using interventional perturbation Tree SHAP. The positive and the negative contributions of a node are represented in <span style="color:green">green</span> and <span style="color:red">red</span>, respectively.
In the figure above, the paths followed due to the instance to be explained $x$ are coloured in red, the paths followed due to the background sample in blue, and the common paths in yellow.
The instance to be explained is perturbed using a reference sample by replacing the values of the features $F1$, $F3$ and $F5$ in $x$ with the corresponding values in $r$. This process gives the name of the algorithm since following the paths indicated by the background sample is akin to intervening on the instance to be explained with features from the background sample. Therefore, one defines the set $F$ in the previous section as $F = \{ j: x_{j} \neq r_{j}\}$ for this case. Note that these are the only features for which one can estimate a contribution given this background sample; the same path is followed for features $F2$ and $F4$ for both the original and the perturbed sample, so these features do not contribute to explaining the difference between the observed outcome ($v_6$) and the outcome that would have been observed if the tree had been traversed according to the reference $(v_{10})$.
Considering the structure of the tree for the given $x$ and $r$ together with equation $(1)$ reveals that the left subtree can be traversed to compute the negative terms in the summation whereas the right subtree will provide positive terms. This is because the nodes in the left subtree can only be reached if $F1$ takes the value from the background sample, that is, only $F1$ is missing. Because $F2$ and $F4$ do not contribute to explaining $f(x) - f(r)$, the negative contribution of the left subtree will be equal to the negative contribution of node $8$. This node sums two negative components: one when the downstream feature $F5$ is also missing (corresponding to evaluating $f$ at $S = \varnothing$) and one when $F5$ is present (corresponding to evaluating $f$ at $S=\{F5\}$). These negative values are weighted according to the combinatorial factor in equation $(1)$. By a similar reasoning, the nodes in the right subtree are reached only if $F1$ is present and they provide the positive terms for the shap value computation. Note that the combinatorial factor in $(1)$ should be evaluated with $|S| \gets |S| - 1$ for positive contributions since $|S|$ is increased by $1$ because the feature whose contribution is calculated is present in the right subtree.
A similar reasoning is applied to compute the contributions of the downstream nodes. For example, to estimate the contribution of $F5$, one considers a set $S = \varnothing$ and observes the value of node $10$, and weighs that with the combinatorial factor from equation $(1)$ where $M-1 = 1$ and $|S|=0$ (because there are no features present on the path) and a positive contribution from node $9$ weighted by the same combinatorial factor (because $S = \{F5\}$ so $|S| - 1 = 0$).
To summarise, the efficient algorithm relies on the following key ideas:
- each node in the tree is assigned a positive contribution reflecting membership of the splitting feature in a subset $S$ and a negative contribution to indicate the feature is missing ($i\in \bar{S}$)
- the positive and negative contributions of a node can be computed by summing the positive and negative contributions of the children nodes, in keeping with the fact that the Shapley value can be computed by summing a contribution from each path the feature is on
- to compute the contribution of a feature at a node, one adds a positive contribution from the node reached by splitting on the feature from the instance to be explained and a negative contribution from the node reached by splitting on the feature in the background sample
- features for which the instance to be explained and the reference follow the same path are assigned $0$ contribution.
#### Explaining loss functions
One advantage of the interventional approach is that it allows one to approximately transform the shap values to account for a nonlinear transformation of the output, such as the loss function. Recall that, given $\phi_1, ..., \phi_M$ and $\phi_0 = \mathbb{E}[f(x)]$, the local accuracy property guarantees that
\begin{equation}\tag{3}
f(x) = \phi_0 + \sum \limits_{i=1}^M \phi_i.
\end{equation}
Hence, in order to account for the effect of the nonlinear transformation $h$, one has to find the functions $g_0, ..., g_M$ such that
\begin{equation}\tag{4}
h(f(x)) = g_0(\phi_0) + \sum \limits_{i=1}^M g_i(\phi_i)
\end{equation}
For simplicity, let $y=f(x)$. Then using a first-order Taylor series expansion around $\mathbb{E}[y]$ one obtains
\begin{equation}\tag{5}
h(y) \approx h(\mathbb{E}[y]) + \frac{\partial h(y) }{\partial y} \Bigr|_{y=\mathbb{E}[y]}(y - \mathbb{E}[y]).
\end{equation}
Substituting $(3)$ in $(5)$ and comparing coefficients with $(4)$ yields
\begin{equation*}
\begin{split}
g_0 & \approx h(\mathbb{E}[y]) \\
g_i &\approx \phi_i \frac{\partial h(y) }{\partial y} \Bigr|_{y=\mathbb{E}[y]} .
\end{split}
\end{equation*}
Hence, an approximate correction is given by simply scaling the shap values using the gradient of the nonlinear function. Note that in practice one may take the Taylor series expansion at a reference point $r$ from the background dataset and average over the entire background dataset to compute the scaling factor. This introduces an additional source of noise since $h(\mathbb{E}[y]) = \mathbb{E}[h(y)]$ only when $h$ is linear.
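As a hedged illustration, the sketch below applies this first-order correction assuming a sigmoid link function $h$ mapping margins to probabilities; the scaling factor is $h'(\mathbb{E}[y])$ evaluated at the expected margin:
```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def scale_shap_to_probability(phi, expected_margin):
    """First-order (Taylor) rescaling of margin-space shap values so that they
    approximately explain sigmoid(f(x)) instead of f(x), per equations (3)-(5).
    This is an approximation, not an exact probability-space attribution."""
    grad = sigmoid(expected_margin) * (1.0 - sigmoid(expected_margin))  # h'(E[y])
    g0 = sigmoid(expected_margin)                                       # approximate new base value
    return g0, np.asarray(phi) * grad

# Hypothetical margin-space explanation: expected margin 0.1 and three shap values.
print(scale_shap_to_probability([0.4, -0.2, 0.3], expected_margin=0.1))
```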
#### Computational complexity
For a single foreground and background sample and a single tree, the algorithm runs in $O(LD)$ time. Thus, using $R$ background samples and a model containing $T$ trees, yields a complexity of $O(TRLD)$.
### Path dependent feature perturbation
<a id='path_dependent'></a>
Another way to approximate equation $(2)$ to compute $f(S)$ given an instance $x$ and a set of missing features $\bar{S}$ is to recursively follow the decision path through the tree and:
- return the node value if a split on a feature $i \in S$ is performed
- take a weighted average of the values returned by children if $i \in \bar{S}$, where the weighting factor is equal to the proportion of training examples flowing down each branch. This proportion is a property of each node, sometimes referred to as _weight_ or _cover_, and measures how important that node is with regard to classifying the training data.
Therefore, in the path-dependent perturbation method, we compute the expectations with respect to the training data distribution by weighting the leaf values according to the proportion of the training examples that flow to that leaf.
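A minimal sketch of this recursion, written against a hypothetical node structure (the `feature`, `threshold`, `left`, `right`, `value` and `cover` attributes are assumptions for illustration, not shap or alibi internals):
```python
def expected_value_given_S(node, x, S):
    """Estimate f(S) for a single tree with the path-dependent rule described above:
    follow the split when its feature is in S, otherwise take a cover-weighted
    average over both children."""
    if node.left is None and node.right is None:   # leaf
        return node.value
    if node.feature in S:                          # feature present: follow x down the tree
        child = node.left if x[node.feature] <= node.threshold else node.right
        return expected_value_given_S(child, x, S)
    # feature missing: weight the children by the fraction of training data they cover
    left = expected_value_given_S(node.left, x, S)
    right = expected_value_given_S(node.right, x, S)
    return (node.left.cover * left + node.right.cover * right) / node.cover
```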
To avoid repeating the above recursion $M2^M$ times, one first notices that for a single decision tree, applying a perturbation would result in the sample ending up in a different leaf. Therefore, following each path from the root to a leaf in the tree is equivalent to perturbing subsets of features of varying cardinalities. Consequently, each leaf will contain a certain proportion of all possible subsets $S \subseteq F$. Therefore, to compute the shap values, the following quantities are computed at each leaf, *for every feature $i$ on the path leading to that leaf*:
- the proportion of subsets $S$ at the leaf that contain $i$ and the proportion of subsets $S$ that do not contain $i$
- for each cardinality, the proportion of the sets of that cardinality contained at the leaf. Tracking each cardinality as opposed to a single count of subsets falling into a given leaf is necessary since it allows to apply the weighting factor in equation (1), which depends on the subset size, $|S|$.
This intuition can be summarised as follows:
\begin{equation}\tag{6}
\phi_i := \sum \limits_{j=1}^L \sum \limits_{P \in {S_j}} \frac {w(|P|, j)}{ M_j {\binom{M_j - 1}{|P|}}} (p_o^{i,j} - p_z^{i, j}) v_j
\end{equation}
where $S_j$ is the set of present feature subsets at leaf $j$, $M_j$ is the length of the path, $w(|P|, j)$ is the proportion of all subsets of cardinality $|P|$ at leaf $j$, and $p_o^{i, j}$ and $p_z^{i, j}$ represent the fractions of subsets that contain or do not contain feature $i$, respectively.
#### Computational complexity
Using the above quantities, one can compute the _contribution_ of each leaf to the Shapley value of every feature. This algorithm has complexity $O(TLD^2)$ for an ensemble of trees where $L$ is the number of leaves, $T$ the number of trees in the ensemble and $D$ the maximum tree depth. If the tree is balanced, then $D=\log L$ and the complexity of our algorithm is $O(TL\log^2L)$
#### Expected value for the path-dependent perturbation algorithm
Note that although a background dataset is not provided, the expected value is computed using the node cover information, stored at each node. The computation proceeds recursively, starting at the root. The contribution of a node to the expected value of the tree is a function of the expected values of the children and is computed as follows:
$$
c_j = \frac{c_{r(j)}r_{r(j)} + c_{l(j)}r_{l(j)}}{r_j}
$$
where $j$ denotes the node index, $c_j$ denotes the node expected value, $r_j$ is the cover of the $j$th node and $r(j)$ and $l(j)$ represent the indices of the right and left children, respectively. The expected value used by the tree is simply $c_{root}$. Note that for tree ensembles, the expected values of the ensemble members is weighted according to the tree weight and the weighted expected values of all trees are summed to obtain a single value.
The cover depends on the objective function and the model chosen. For example, in a gradient boosted tree trained with squared loss objective, $r_j$ is simply the number of training examples flowing through $j$. For an arbitrary objective, this is the sum of the Hessian of the loss function evaluated at each point flowing through $j$, as explained [here](../examples/xgboost_model_fitting_adult.ipynb).
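A minimal sketch of this expected value recursion, reusing the same hypothetical node structure as the earlier path-dependent sketch:
```python
def node_expected_value(node):
    """Expected value c_j obtained from the node covers, following the recursion above."""
    if node.left is None and node.right is None:   # leaf: c_j is the leaf value
        return node.value
    c_left = node_expected_value(node.left)
    c_right = node_expected_value(node.right)
    return (node.left.cover * c_left + node.right.cover * c_right) / node.cover

# The tree's expected value is node_expected_value(root); for an ensemble, the
# members' (weighted) expected values are summed to give a single value.
```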
### Shapley interaction values
While the Shapley values provide a solution to the problem of allocating a function variation to the input features, in practice it might be of interest to understand how the importance of a feature depends on the other features. The Shapley interaction values can solve this problem, by allocating the change in the function amongst the individual features (*main effects*) and all pairs of features (*interaction effects*). Thus, they are defined as
\begin{equation}\tag{7}
\Phi_{i, j}(f, x) = \sum_{S \subseteq {F \setminus \{i, j\}}} \frac{1}{2|S| {\binom{M-1}{|S| - 1}}} \nabla_{ij}(f, x, S), \; i \neq j
\end{equation}
and
\begin{equation}\tag{8}
\nabla_{ij}(f, x, S) = \underbrace{f_{x}(S \cup \{i, j\}) - f_x(S \cup \{j\})}_{j \; present} - \underbrace{[f_x(S \cup \{i\}) - f_x(S)]}_{j \; not \; present}.
\end{equation}
Therefore, the interaction of features $i$ and $j$ can be computed by taking the difference between the shap values of $i$ when $j$ is present and when $j$ is not present. The main effects are defined as
$$
\Phi_{i,i}(f, x) = \phi_i(f, x) - \sum_{i \neq j} \Phi_{i, j}(f, x),
$$
Setting $\Phi_{0, 0} = f_x(\varnothing)$ yields the local accuracy property for Shapley interaction values:
$$f(x) = \sum \limits_{i=0}^M \sum \limits_{j=0}^M \Phi_{i, j}(f, x).$$
The interaction is split equally between feature $i$ and $j$, which is why the division by two appears in equation $(7)$. The total interaction effect is defined as $\Phi_{i, j}(f, x) + \Phi_{j, i}(f,x)$.
#### Computational complexity
According to equation $(8)$, the interaction values can be computed by applying either the interventional or path-dependent feature perturbation algorithm twice: once by fixing the value of feature $j$ to $x_j$ and computing the shapley value for feature $i$ in this configuration, and once by fixing $x_j$ to a "missing" value and performing the same computation. Thus, the interaction values can be computed in $O(TMLD^2)$ with the path-dependent perturbation algorithm and $O(TMLDR)$ with the interventional feature perturbation algorithm.
### Comparison to other methods
Tree-based models are widely used in areas where model interpretability is of interest because node-level statistics gathered from the training data can be used to provide insights into the behaviour of the model across the training dataset, providing a _global explanation_ technique. As shown in our [example](../examples/path_dependent_tree_shap_adult_xgb.ipynb), considering different statistics gives rise to different importance rankings. As discussed in [[1]](#References) and [[3]](#References), depending on the statistic chosen, feature importances derived from trees are not *consistent*, meaning that a feature known to have a bigger impact on the model might fail to be assigned a larger importance. As such, feature importances cannot be compared across models. In contrast, both the path-dependent and interventional perturbation algorithms tackle this limitation.
In contrast to feature importances derived from tree statistics, the Tree SHAP algorithms can also provide local explanations, allowing the identification of features that are globally "not important", but can affect specific outcomes significantly, as might be the case in healthcare applications. Additionally, it provides a means to succinctly summarise the effect magnitude and direction (positive or negative) across potentially large samples. Finally, as shown in [[1]](#References) (see [here](https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-019-0138-9/MediaObjects/42256_2019_138_MOESM1_ESM.pdf), p. 26), averaging the instance-level shap values importance to derive a global score for each feature can result in improvements in feature selection tasks.
Another method to derive instance-level explanations for tree-based models has been proposed by Saabas [here](https://github.com/andosa/treeinterpreter). This feature attribution method is similar in spirit to the Shapley value, but does not account for the effect of variable order as explained [here](https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-019-0138-9/MediaObjects/42256_2019_138_MOESM1_ESM.pdf) (pp. 10-11), as well as not satisfying consistency ([[3]](#References)).
Finally, both Tree SHAP algorithms exploit model structure to provide exact Shapley value computation, albeit using different estimates for the effect of missing features, achieving explanations in low-order polynomial time. The KernelShap method relies on post-hoc (black-box) function modelling and approximations to approximate the same quantities and, given enough samples, has been shown to converge to the exact values (see experiments [here](https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-019-0138-9/MediaObjects/42256_2019_138_MOESM1_ESM.pdf) and our [example](../examples/interventional_tree_shap_adult_xgb.ipynb)). Our Kernel SHAP [documentation](KernelSHAP.ipynb) provides comparisons of feature attribution methods based on Shapley values with other algorithms such as LIME and [anchors](Anchors.ipynb).
<a id='source_3'></a>
## References
<a id='References'></a>
[[1]](#source_1) Lundberg, S.M. and Lee, S.I., 2017. A unified approach to interpreting model predictions. In Advances in neural information processing systems (pp. 4765-4774).
[[2]](#source_2) Janzing, D., Minorics, L. and Blöbaum, P., 2019. Feature relevance quantification in explainable AI: A causality problem. arXiv preprint arXiv:1910.13413.
[[3]](#source_3) Lundberg, S.M., Erion, G.G. and Lee, S.I., 2018. Consistent individualized feature attribution for tree ensembles. arXiv preprint arXiv:1802.03888.
[[4]](#source_4) Chen, H., Lundberg, S.M. and Lee, S.I., 2018. Understanding Shapley value explanation algorithms for trees. Under review for publication in Distill, draft available [here](https://hughchen.github.io/its_blog/index.html).
## Examples
### Path-dependent Feature Perturbation Tree SHAP
[Explaining tree models with path-dependent feature perturbation Tree SHAP](../examples/path_dependent_tree_shap_adult_xgb.ipynb)
### Interventional Feature Perturbation Tree SHAP
[Explaining tree models with interventional feature perturbation Tree SHAP](../examples/interventional_tree_shap_adult_xgb.ipynb)
|
github_jupyter
|
# Using PyTorch with TensorRT through ONNX:
TensorRT is a great way to take a trained PyTorch model and optimize it to run more efficiently during inference on an NVIDIA GPU.
One approach to convert a PyTorch model to TensorRT is to export a PyTorch model to ONNX (an open format exchange for deep learning models) and then convert into a TensorRT engine. Essentially, we will follow this path to convert and deploy our model:

Both TensorFlow and PyTorch models can be exported to ONNX, as well as many other frameworks. This allows models created using either framework to flow into common downstream pipelines.
To get started, let's take a well-known computer vision model and follow five key steps to deploy it to the TensorRT Python runtime:
1. __What format should I save my model in?__
2. __What batch size(s) am I running inference at?__
3. __What precision am I running inference at?__
4. __What TensorRT path am I using to convert my model?__
5. __What runtime am I targeting?__
## 1. What format should I save my model in?
We are going to use ResNet50, a widely used CNN architecture first described in <a href=https://arxiv.org/abs/1512.03385>this paper</a>.
Let's start by loading dependencies and downloading the model:
```
import torchvision.models as models
import torch
import torch.onnx
# load the pretrained model
resnet50 = models.resnet50(pretrained=True, progress=False)
```
Next, we will select our batch size and export the model:
```
# set up a dummy input tensor and export the model to ONNX
BATCH_SIZE = 32
dummy_input=torch.randn(BATCH_SIZE, 3, 224, 224)
torch.onnx.export(resnet50, dummy_input, "resnet50_pytorch.onnx", verbose=False)
```
Note that we are picking a BATCH_SIZE of 32 in this example.
Let's use a benchmarking function included in this guide to time this model:
```
from benchmark import benchmark
resnet50.to("cuda").eval()
benchmark(resnet50)
```
Now, let's restart our Jupyter Kernel so PyTorch doesn't collide with TensorRT:
```
import os
os._exit(0) # Shut down all kernels so TRT doesn't fight with PyTorch for GPU memory
```
## 2. What batch size(s) am I running inference at?
We are going to run with a fixed batch size of 32 for this example. Note that above we set BATCH_SIZE to 32 when saving our model to ONNX. We need to create another dummy batch of the same size (this time it will need to be in our target precision) to test out our engine.
First, as before, we will set our BATCH_SIZE to 32. Note that our trtexec command below includes the '--explicitBatch' flag to signal to TensorRT that we will be using a fixed batch size at runtime.
```
BATCH_SIZE = 32
```
Importantly, by default TensorRT will use the input precision you give the runtime as the default precision for the rest of the network. So before we create our new dummy batch, we also need to choose a precision as in the next section:
## 3. What precision am I running inference at?
Remember that lower precisions than FP32 tend to run faster. There are two common reduced precision modes - FP16 and INT8. Graphics cards that are designed to do inference well often have an affinity for one of these two types. This guide was developed on an NVIDIA V100, which favors FP16, so we will use that here by default. INT8 is a more complicated process that requires a calibration step.
```
import numpy as np
USE_FP16 = True
target_dtype = np.float16 if USE_FP16 else np.float32
dummy_input_batch = np.zeros((BATCH_SIZE, 3, 224, 224), dtype = np.float32)  # NCHW layout, matching the ONNX export
```
## 4. What TensorRT path am I using to convert my model?
We can use trtexec, a command line tool for working with TensorRT, in order to convert an ONNX model originally from PyTorch to an engine file.
Let's make sure we have TensorRT installed (this comes with trtexec):
```
import tensorrt
```
To convert the model we saved in the previous step, we need to point to the ONNX file, give trtexec a name to save the engine as, and last specify that we want to use a fixed batch size instead of a dynamic one.
```
# step out of Python for a moment to convert the ONNX model to a TRT engine using trtexec
if USE_FP16:
!trtexec --onnx=resnet50_pytorch.onnx --saveEngine=resnet_engine_pytorch.trt --explicitBatch --fp16
else:
!trtexec --onnx=resnet50_pytorch.onnx --saveEngine=resnet_engine_pytorch.trt --explicitBatch
```
This will save our model as 'resnet_engine_pytorch.trt'.
## 5. What TensorRT runtime am I targeting?
Now, we have converted our model to a TensorRT engine. Great! That means we are ready to load it into the native Python TensorRT runtime. This runtime strikes a balance between the ease of use of the high level Python APIs used in frameworks and the fast, low level C++ runtimes available in TensorRT.
```
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
f = open("resnet_engine_pytorch.trt", "rb")
runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
```
Now allocate input and output memory, give TRT pointers (bindings) to it:
```
# need to set input and output precisions to FP16 to fully enable it
output = np.empty([BATCH_SIZE, 1000], dtype = target_dtype)
# allocate device memory
d_input = cuda.mem_alloc(1 * dummy_input_batch.nbytes)
d_output = cuda.mem_alloc(1 * output.nbytes)
bindings = [int(d_input), int(d_output)]
stream = cuda.Stream()
```
Next, set up the prediction function.
This involves a copy from CPU RAM to GPU VRAM, executing the model, then copying the results back from GPU VRAM to CPU RAM:
```
def predict(batch): # result gets copied into output
# transfer input data to device
cuda.memcpy_htod_async(d_input, batch, stream)
# execute model
context.execute_async_v2(bindings, stream.handle, None)
# transfer predictions back
cuda.memcpy_dtoh_async(output, d_output, stream)
# synchronize threads
stream.synchronize()
return output
```
Finally, let's time the function!
Note that we're going to include the extra CPU-GPU copy time in this evaluation, so it won't be directly comparable with our TRTorch model performance as it also includes additional overhead.
```
print("Warming up...")
predict(dummy_input_batch)
print("Done warming up!")
%%timeit
pred = predict(dummy_input_batch)
```
However, even with the CPU-GPU copy, this is still faster than our raw PyTorch model!
## Next Steps:
<h4> Profiling </h4>
This is a great next step for further optimizing and debugging models you are working on productionizing
You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html
<h4> TRT Dev Docs </h4>
Main documentation page for the ONNX, layer builder, C++, and legacy APIs
You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html
<h4> TRT OSS GitHub </h4>
Contains OSS TRT components, sample applications, and plugin examples
You can find it here: https://github.com/NVIDIA/TensorRT
#### TRT Supported Layers:
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#layers-precision-matrix
#### TRT ONNX Plugin Example:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/samplePlugin
|
github_jupyter
|
```
import pandas as pd
import utils
import matplotlib.pyplot as plt
import random
import plotly.express as px
import numpy as np
random.seed(9000)
plt.style.use("seaborn-ticks")
plt.rcParams["image.cmap"] = "Set1"
plt.rcParams['axes.prop_cycle'] = plt.cycler(color=plt.cm.Set1.colors)
%matplotlib inline
```
In this notebook the Percent Replicating score for DMSO at each position is computed for the following U2OS 48h time point compound plates
1. Whole plate normalized CP profiles
2. Spherized CP profiles
3. Spherized DL profiles
The following are the steps taken
1. Whole plate normalized CP profiles, Spherized CP profiles and Spherized DL profiles from the 48h Compound experiment are read and the replicates plates merged into a single dataframe.
2. All the non-negative control wells are removed.
3. DMSO wells in the same position are considered replicates while DMSO wells in different positions are considered non-replicates.
4. The signal distribution, which is the median pairwise replicate correlation, is computed for each replicate.
5. The null distribution, which is the median pairwise correlation of non-replicates, is computed for 10000 combinations of non-replicates.
6. Percent Replicating is computed as the percentage of the signal distribution that is greater than the 95th percentile of the null distribution (see the sketch after this list).
7. The signal and noise distributions and the Percent Replicating values are plotted and the table of Percent Replicating is printed.
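For orientation, a minimal sketch of the threshold logic in step 6; the actual computation in this notebook is done by `utils.percent_score`, and this sketch only illustrates the idea:
```python
import numpy as np

def percent_score_sketch(null_dist, signal_dist):
    """Sketch of step 6: the fraction of the signal distribution lying above the
    95th percentile of the null distribution (the notebook uses utils.percent_score)."""
    value_95 = np.percentile(null_dist, 95)
    percent = 100 * np.mean(np.asarray(signal_dist) > value_95)
    return percent, value_95
```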
```
n_samples = 10000
n_replicates = 4
corr_replicating_df = pd.DataFrame()
group_by_feature = 'Metadata_Well'
perturbation = "compound"
cell = "U2OS"
time = "48"
experiment_df = (
pd.read_csv('output/experiment-metadata.tsv', sep='\t')
.query('Batch=="2020_11_04_CPJUMP1" or Batch=="2020_11_04_CPJUMP1_DL"')
.query('Perturbation==@perturbation')
.query('Cell_type==@cell')
.query('Time==@time')
)
batches = {
"2020_11_04_CPJUMP1": {
"normalized": "normalized.csv.gz",
"spherized": "spherized.csv.gz"
},
"2020_11_04_CPJUMP1_DL": {
"spherized": "spherized.csv.gz"
}
}
for batch in experiment_df.Batch.unique():
for type in batches[batch]:
filename = batches[batch][type]
batch_df = experiment_df.query('Batch==@batch')
data_df = pd.DataFrame()
for plate in batch_df.Assay_Plate_Barcode.unique():
plate_df = utils.load_data(batch, plate, filename)
data_df = utils.concat_profiles(data_df, plate_df)
data_df = data_df.query('Metadata_control_type=="negcon"')
metadata_df = utils.get_metadata(data_df)
features_df = utils.get_featuredata(data_df).replace(np.inf, np.nan).dropna(axis=1, how="any")
data_df = pd.concat([metadata_df, features_df], axis=1)
replicating_corr = list(utils.corr_between_replicates(data_df, group_by_feature)) # signal distribution
null_replicating = list(utils.corr_between_non_replicates(data_df, n_samples=n_samples, n_replicates=n_replicates, metadata_compound_name = group_by_feature)) # null distribution
prop_95_replicating, value_95_replicating = utils.percent_score(null_replicating,
replicating_corr,
how='right')
if batch == "2020_11_04_CPJUMP1":
features = 'CellProfiler'
else:
features = 'DeepProfiler'
corr_replicating_df = corr_replicating_df.append({'Description':f'{features}_{type}',
'Modality':f'{perturbation}',
'Cell':f'{cell}',
'time':f'{time}',
'Replicating':replicating_corr,
'Null_Replicating':null_replicating,
'Percent_Replicating':'%.1f'%prop_95_replicating,
'Value_95':value_95_replicating}, ignore_index=True)
print(corr_replicating_df[['Description', 'Percent_Replicating']].to_markdown(index=False))
utils.distribution_plot(df=corr_replicating_df, output_file="5.percent_replicating.png", metric="Percent Replicating")
corr_replicating_df['Percent_Replicating'] = corr_replicating_df['Percent_Replicating'].astype(float)
corr_replicating_df.loc[(corr_replicating_df.Modality=='compound') & (corr_replicating_df.time=='48'), 'time'] = 'long'
plot_corr_replicating_df = (
corr_replicating_df.rename(columns={'Modality':'Perturbation'})
.drop(columns=['Null_Replicating','Value_95','Replicating'])
)
fig = px.bar(data_frame=plot_corr_replicating_df,
x='Description',
y='Percent_Replicating',
facet_row='time',
facet_col='Cell')
fig.update_layout(title='Percent Replicating vs. Perturbation - U2OS 48h Compound plates',
xaxis=dict(title='Feature set'),
yaxis=dict(title='Percent Replicating'),
yaxis3=dict(title='Percent Replicating'))
fig.show("png")
fig.write_image(f'figures/5.percent_replicating_facet.png', width=640, height=480, scale=2)
print(plot_corr_replicating_df[['Description','Perturbation','time', 'Cell' ,'Percent_Replicating']].to_markdown(index=False))
```
|
github_jupyter
|