Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k)
---|---|---|
11,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Snapshotting with Devito using the ConditionalDimension
This notebook intends to introduce new Devito users (especially those with a C or FORTRAN background) to best practice for saving snapshots to disk as a binary float file.
We start by presenting a naive approach, and then introduce a more efficient method, which exploits Devito's ConditionalDimension.
Initialize utilities
Step1: Problem Setup
This tutorial is based on an example that appeared in a TLE tutorial (Louboutin et al., 2017), in which one shot is modeled over a 2-layer velocity model.
Step2: Saving snaps to disk - naive approach
We want to get equally spaced snaps from the nt-2 time steps saved in u.data. The user then defines the total number of snaps, nsnaps, which determines a factor by which nt is divided.
Step3: Checking u.data spaced by factor using matplotlib,
Step4: Or from the saved file
Step5: This C/FORTRAN way of saving snaps is clearly not optimal when using Devito; the wavefield object u is specified to save all snaps, and a memory copy is done at every op time step. Given that we don't want all the snaps saved, this process is wasteful; only the selected snapshots should be copied during execution.
To address these issues, a better way to save snaps using Devito's capabilities is presented in the following section.
Saving snaps to disk - Devito method
A better way to save snapshots to disk is to create a new TimeFunction, usave, whose time size is equal to
nsnaps. There are 3 main differences from the previous code, which are flagged by #Part 1, #Part 2 and #Part 3. After running the code, each part is explained in more detail.
Step6: As usave.data has the desired snaps, no extra variable copy is required. The snaps can then be visualized
Step7: About Part 1
Here a subsampled version (time_subsampled) of the full time Dimension (model.grid.time_dim) is created with the ConditionalDimension. time_subsampled is then used to define an additional symbolic wavefield usave, which will store in usave.data only the predefined number of snapshots (see Part 2).
Further insight on how ConditionalDimension works and its most common uses can be found in the Devito documentation. The following excerpt exemplifies subsampling of simple functions
Step8: To run snaps as a movie (outside Jupyter Notebook), run the code below, altering filename, nsnaps, nx, nz accordingly | Python Code:
#NBVAL_IGNORE_OUTPUT
%reset -f
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Snapshotting with Devito using the ConditionalDimension
This notebook intends to introduce new Devito users (especially those with a C or FORTRAN background) to best practice for saving snapshots to disk as a binary float file.
We start by presenting a naive approach, and then introduce a more efficient method, which exploits Devito's ConditionalDimension.
Initialize utilities
End of explanation
# This cell sets up the problem that is already explained in the first TLE tutorial.
#NBVAL_IGNORE_OUTPUT
#%%flake8
from examples.seismic import Receiver
from examples.seismic import RickerSource
from examples.seismic import Model, plot_velocity, TimeAxis
from devito import TimeFunction
from devito import Eq, solve
from devito import Operator
# Set velocity model
nx = 201
nz = 201
nb = 10
shape = (nx, nz)
spacing = (20., 20.)
origin = (0., 0.)
v = np.empty(shape, dtype=np.float32)
v[:, :int(nx/2)] = 2.0
v[:, int(nx/2):] = 2.5
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=2, nbl=10, bcs="damp")
# Set time range, source, source coordinates and receiver coordinates
t0 = 0.  # Simulation starts at t=0
tn = 1000. # Simulation lasts tn milliseconds
dt = model.critical_dt # Time step from model grid spacing
time_range = TimeAxis(start=t0, stop=tn, step=dt)
nt = time_range.num # number of time steps
f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz)
src = RickerSource(
name='src',
grid=model.grid,
f0=f0,
time_range=time_range)
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 20. # Depth is 20m
rec = Receiver(
name='rec',
grid=model.grid,
npoint=101,
time_range=time_range) # new
rec.coordinates.data[:, 0] = np.linspace(0, model.domain_size[0], num=101)
rec.coordinates.data[:, 1] = 20. # Depth is 20m
depth = rec.coordinates.data[:, 1] # Depth is 20m
plot_velocity(model, source=src.coordinates.data,
receiver=rec.coordinates.data[::4, :])
#Used for reshaping
vnx = nx+20
vnz = nz+20
# Set symbolics for the wavefield object `u`, setting save on all time steps
# (which can occupy a lot of memory), to later collect snapshots (naive method):
u = TimeFunction(name="u", grid=model.grid, time_order=2,
space_order=2, save=time_range.num)
# Set symbolics of the operator, source and receivers:
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
stencil = Eq(u.forward, solve(pde, u.forward))
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m,
offset=model.nbl)
rec_term = rec.interpolate(expr=u, offset=model.nbl)
op = Operator([stencil] + src_term + rec_term, subs=model.spacing_map)
# Run the operator for `(nt-2)` time steps:
op(time=nt-2, dt=model.critical_dt)
Explanation: Problem Setup
This tutorial is based on an example that appeared in a TLE tutorial (Louboutin et al., 2017), in which one shot is modeled over a 2-layer velocity model.
End of explanation
nsnaps = 100
factor = round(u.shape[0] / nsnaps) # Get approx nsnaps, for any nt
ucopy = u.data.copy(order='C')
filename = "naivsnaps.bin"
file_u = open(filename, 'wb')
for it in range(0, nsnaps):
file_u.write(ucopy[it*factor, :, :])
file_u.close()
Explanation: Saving snaps to disk - naive approach
We want to get equally spaced snaps from the nt-2 time steps saved in u.data. The user then defines the total number of snaps, nsnaps, which determines a factor by which nt is divided.
End of explanation
#NBVAL_IGNORE_OUTPUT
plt.rcParams['figure.figsize'] = (20, 20) # Increases figure size
imcnt = 1 # Image counter for plotting
plot_num = 5 # Number of images to plot
for i in range(0, nsnaps, int(nsnaps/plot_num)):
plt.subplot(1, plot_num+1, imcnt+1)
imcnt = imcnt + 1
plt.imshow(np.transpose(u.data[i * factor, :, :]), vmin=-1, vmax=1, cmap="seismic")
plt.show()
Explanation: Checking u.data spaced by factor using matplotlib,
End of explanation
#NBVAL_IGNORE_OUTPUT
fobj = open("naivsnaps.bin", "rb")
snaps = np.fromfile(fobj, dtype = np.float32)
snaps = np.reshape(snaps, (nsnaps, vnx, vnz)) #reshape vec2mtx, devito format. nx first
fobj.close()
plt.rcParams['figure.figsize'] = (20,20) # Increases figure size
imcnt = 1 # Image counter for plotting
plot_num = 5 # Number of images to plot
for i in range(0, nsnaps, int(nsnaps/plot_num)):
plt.subplot(1, plot_num+1, imcnt+1);
imcnt = imcnt + 1
plt.imshow(np.transpose(snaps[i,:,:]), vmin=-1, vmax=1, cmap="seismic")
plt.show()
Explanation: Or from the saved file:
End of explanation
#NBVAL_IGNORE_OUTPUT
from devito import ConditionalDimension
nsnaps = 103 # desired number of equally spaced snaps
factor = round(nt / nsnaps) # subsequent calculated factor
print(f"factor is {factor}")
#Part 1 #############
time_subsampled = ConditionalDimension(
't_sub', parent=model.grid.time_dim, factor=factor)
usave = TimeFunction(name='usave', grid=model.grid, time_order=2, space_order=2,
save=nsnaps, time_dim=time_subsampled)
print(time_subsampled)
#####################
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=2)
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt
stencil = Eq(u.forward, solve(pde, u.forward))
src_term = src.inject(
field=u.forward,
expr=src * dt**2 / model.m,
offset=model.nbl)
rec_term = rec.interpolate(expr=u, offset=model.nbl)
#Part 2 #############
op1 = Operator([stencil] + src_term + rec_term,
subs=model.spacing_map) # usual operator
op2 = Operator([stencil] + src_term + [Eq(usave, u)] + rec_term,
subs=model.spacing_map) # operator with snapshots
op1(time=nt - 2, dt=model.critical_dt) # run only for comparison
u.data.fill(0.)
op2(time=nt - 2, dt=model.critical_dt)
#####################
#Part 3 #############
print("Saving snaps file")
print("Dimensions: nz = {:d}, nx = {:d}".format(nz + 2 * nb, nx + 2 * nb))
filename = "snaps2.bin"
usave.data.tofile(filename)
#####################
Explanation: This C/FORTRAN way of saving snaps is clearly not optimal when using Devito; the wavefield object u is specified to save all snaps, and a memory copy is done at every op time step. Given that we don't want all the snaps saved, this process is wasteful; only the selected snapshots should be copied during execution.
To address these issues, a better way to save snaps using Devito's capabilities is presented in the following section.
Saving snaps to disk - Devito method
A better way to save snapshots to disk is to create a new TimeFunction, usave, whose time size is equal to
nsnaps. There are 3 main differences from the previous code, which are flagged by #Part 1, #Part 2 and #Part 3. After running the code, each part is explained in more detail.
End of explanation
#NBVAL_IGNORE_OUTPUT
fobj = open("snaps2.bin", "rb")
snaps = np.fromfile(fobj, dtype=np.float32)
snaps = np.reshape(snaps, (nsnaps, vnx, vnz))
fobj.close()
plt.rcParams['figure.figsize'] = (20, 20) # Increases figure size
imcnt = 1 # Image counter for plotting
plot_num = 5 # Number of images to plot
for i in range(0, plot_num):
plt.subplot(1, plot_num, i+1);
imcnt = imcnt + 1
ind = i * int(nsnaps/plot_num)
plt.imshow(np.transpose(snaps[ind,:,:]), vmin=-1, vmax=1, cmap="seismic")
plt.show()
Explanation: As usave.data has the desired snaps, no extra variable copy is required. The snaps can then be visualized:
End of explanation
def print2file(filename, thingToPrint):
import sys
orig_stdout = sys.stdout
f = open(filename, 'w')
sys.stdout = f
print(thingToPrint)
f.close()
sys.stdout = orig_stdout
# print2file("op1.c", op1) # uncomment to print to file
# print2file("op2.c", op2) # uncomment to print to file
# print(op1) # uncomment to print here
# print(op2) # uncomment to print here
Explanation: About Part 1
Here a subsampled version (time_subsampled) of the full time Dimension (model.grid.time_dim) is created with the ConditionalDimension. time_subsampled is then used to define an additional symbolic wavefield usave, which will store in usave.data only the predefined number of snapshots (see Part 2).
Further insight on how ConditionalDimension works and its most common uses can be found in the Devito documentation. The following excerpt exemplifies subsampling of simple functions:
Among the other things, ConditionalDimensions are indicated to implement
Function subsampling. In the following example, an Operator evaluates the
Function ``g`` and saves its content into ``f`` every ``factor=4`` iterations.
>>> from devito import Dimension, ConditionalDimension, Function, Eq, Operator
>>> size, factor = 16, 4
>>> i = Dimension(name='i')
>>> ci = ConditionalDimension(name='ci', parent=i, factor=factor)
>>> g = Function(name='g', shape=(size,), dimensions=(i,))
>>> f = Function(name='f', shape=(int(size/factor),), dimensions=(ci,))
>>> op = Operator([Eq(g, 1), Eq(f, g)])
The Operator generates the following for-loop (pseudocode)
.. code-block:: C
for (int i = i_m; i <= i_M; i += 1) {
g[i] = 1;
if (i%4 == 0) {
f[i / 4] = g[i];
}
}
From this excerpt we can see that the C code generated by the Operator with the extra equation Eq(f, g) essentially adds an if block to the optimized C code, which copies the desired snapshots from g into f at the correct times. Following the same line of thought, in the following section the symbolic and generated C code are compared with and without snapshots.
About Part 2
We then define Operators op1 (no snaps) and op2 (with snaps). The only difference between the two is that op2 has an extra symbolic equation Eq(usave, u). Notice that even though usave and u have different Dimensions, Devito's symbolic interpreter understands it, because usave's time_dim was defined through the ConditionalDimension.
Below, we show relevant excerpts of the compiled Operators. As explained above, the main difference between the optimized C-code of op1 and op2 is the addition of an if block. For op1's C code:
```c
// #define's
//...
// declare dataobj struct
//...
// declare profiler struct
//...
int Kernel(struct dataobj *restrict damp_vec, const float dt, struct dataobj *restrict m_vec, const float o_x, const float o_y, struct dataobj *restrict rec_vec, struct dataobj *restrict rec_coords_vec, struct dataobj *restrict src_vec, struct dataobj *restrict src_coords_vec, struct dataobj *restrict u_vec, const int x_M, const int x_m, const int y_M, const int y_m, const int p_rec_M, const int p_rec_m, const int p_src_M, const int p_src_m, const int time_M, const int time_m, struct profiler * timers)
{
// ...
// ...
float (*restrict u)[u_vec->size[1]][u_vec->size[2]] __attribute__ ((aligned (64))) = (float (*)[u_vec->size[1]][u_vec->size[2]]) u_vec->data;
// ...
for (int time = time_m, t0 = (time)%(3), t1 = (time + 1)%(3), t2 = (time + 2)%(3); time <= time_M; time += 1, t0 = (time)%(3), t1 = (time + 1)%(3), t2 = (time + 2)%(3))
{
struct timeval start_section0, end_section0;
gettimeofday(&start_section0, NULL);
for (int x = x_m; x <= x_M; x += 1)
{
#pragma omp simd
for (int y = y_m; y <= y_M; y += 1)
{
float r0 = 1.0e+4F*dt*m[x + 2][y + 2] + 5.0e+3F*(dt*dt)*damp[x + 1][y + 1];
u[t1][x + 2][y + 2] = 2.0e+4F*dt*m[x + 2][y + 2]*u[t0][x + 2][y + 2]/r0 - 1.0e+4F*dt*m[x + 2][y + 2]*u[t2][x + 2][y + 2]/r0 + 1.0e+2F*((dt*dt*dt)*u[t0][x + 1][y + 2]/r0 + (dt*dt*dt)*u[t0][x + 2][y + 1]/r0 + (dt*dt*dt)*u[t0][x + 2][y + 3]/r0 + (dt*dt*dt)*u[t0][x + 3][y + 2]/r0) + 5.0e+3F*(dt*dt)*damp[x + 1][y + 1]*u[t2][x + 2][y + 2]/r0 - 4.0e+2F*dt*dt*dt*u[t0][x + 2][y + 2]/r0;
}
}
gettimeofday(&end_section0, NULL);
timers->section0 += (double)(end_section0.tv_sec-start_section0.tv_sec)+(double)(end_section0.tv_usec-start_section0.tv_usec)/1000000;
struct timeval start_section1, end_section1;
gettimeofday(&start_section1, NULL);
for (int p_src = p_src_m; p_src <= p_src_M; p_src += 1)
{
//source injection
//...
}
gettimeofday(&end_section1, NULL);
timers->section1 += (double)(end_section1.tv_sec-start_section1.tv_sec)+(double)(end_section1.tv_usec-start_section1.tv_usec)/1000000;
struct timeval start_section2, end_section2;
gettimeofday(&start_section2, NULL);
for (int p_rec = p_rec_m; p_rec <= p_rec_M; p_rec += 1)
{
//receivers interpolation
//...
}
gettimeofday(&end_section2, NULL);
timers->section2 += (double)(end_section2.tv_sec-start_section2.tv_sec)+(double)(end_section2.tv_usec-start_section2.tv_usec)/1000000;
}
return 0;
}
```
op2's C code (differences are highlighted by //<<<<<<<<<<<<<<<<<<<<):
```c
// #define's
//...
// declare dataobj struct
//...
// declare profiler struct
//...
int Kernel(struct dataobj *restrict damp_vec, const float dt, struct dataobj *restrict m_vec, const float o_x, const float o_y, struct dataobj *restrict rec_vec, struct dataobj *restrict rec_coords_vec, struct dataobj *restrict src_vec, struct dataobj *restrict src_coords_vec, struct dataobj *restrict u_vec, struct dataobj *restrict usave_vec, const int x_M, const int x_m, const int y_M, const int y_m, const int p_rec_M, const int p_rec_m, const int p_src_M, const int p_src_m, const int time_M, const int time_m, struct profiler * timers)
{
// ...
// ...
float (*restrict u)[u_vec->size[1]][u_vec->size[2]] __attribute__ ((aligned (64))) = (float (*)[u_vec->size[1]][u_vec->size[2]]) u_vec->data;
//<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<DECLARE USAVE<<<<<<<<<<<<<<<<<<<<<
float (*restrict usave)[usave_vec->size[1]][usave_vec->size[2]] __attribute__ ((aligned (64))) = (float (*)[usave_vec->size[1]][usave_vec->size[2]]) usave_vec->data;
//<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
//flush denormal numbers...
for (int time = time_m, t0 = (time)%(3), t1 = (time + 1)%(3), t2 = (time + 2)%(3); time <= time_M; time += 1, t0 = (time)%(3), t1 = (time + 1)%(3), t2 = (time + 2)%(3))
{
struct timeval start_section0, end_section0;
gettimeofday(&start_section0, NULL);
for (int x = x_m; x <= x_M; x += 1)
{
#pragma omp simd
for (int y = y_m; y <= y_M; y += 1)
{
float r0 = 1.0e+4F*dt*m[x + 2][y + 2] + 5.0e+3F*(dt*dt)*damp[x + 1][y + 1];
u[t1][x + 2][y + 2] = 2.0e+4F*dt*m[x + 2][y + 2]*u[t0][x + 2][y + 2]/r0 - 1.0e+4F*dt*m[x + 2][y + 2]*u[t2][x + 2][y + 2]/r0 + 1.0e+2F*((dt*dt*dt)*u[t0][x + 1][y + 2]/r0 + (dt*dt*dt)*u[t0][x + 2][y + 1]/r0 + (dt*dt*dt)*u[t0][x + 2][y + 3]/r0 + (dt*dt*dt)*u[t0][x + 3][y + 2]/r0) + 5.0e+3F*(dt*dt)*damp[x + 1][y + 1]*u[t2][x + 2][y + 2]/r0 - 4.0e+2F*dt*dt*dt*u[t0][x + 2][y + 2]/r0;
}
}
gettimeofday(&end_section0, NULL);
timers->section0 += (double)(end_section0.tv_sec-start_section0.tv_sec)+(double)(end_section0.tv_usec-start_section0.tv_usec)/1000000;
//<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<SAVE SNAPSHOT<<<<<<<<<<<<<<<<<<<<<
if ((time)%(60) == 0)
{
struct timeval start_section1, end_section1;
gettimeofday(&start_section1, NULL);
for (int x = x_m; x <= x_M; x += 1)
{
#pragma omp simd
for (int y = y_m; y <= y_M; y += 1)
{
usave[time / 60][x + 2][y + 2] = u[t0][x + 2][y + 2];
}
}
//<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
gettimeofday(&end_section1, NULL);
timers->section1 += (double)(end_section1.tv_sec-start_section1.tv_sec)+(double)(end_section1.tv_usec-start_section1.tv_usec)/1000000;
}
struct timeval start_section2, end_section2;
gettimeofday(&start_section2, NULL);
for (int p_src = p_src_m; p_src <= p_src_M; p_src += 1)
{
//source injection
//...
}
gettimeofday(&end_section2, NULL);
timers->section2 += (double)(end_section2.tv_sec-start_section2.tv_sec)+(double)(end_section2.tv_usec-start_section2.tv_usec)/1000000;
struct timeval start_section3, end_section3;
gettimeofday(&start_section3, NULL);
for (int p_rec = p_rec_m; p_rec <= p_rec_M; p_rec += 1)
{
//receivers interpolation
//...
}
gettimeofday(&end_section3, NULL);
timers->section3 += (double)(end_section3.tv_sec-start_section3.tv_sec)+(double)(end_section3.tv_usec-start_section3.tv_usec)/1000000;
}
return 0;
}
```
To inspect the full codes of op1 and op2, run the block below:
End of explanation
#NBVAL_IGNORE_OUTPUT
#NBVAL_SKIP
from IPython.display import HTML
import matplotlib.pyplot as plt
import matplotlib.animation as animation
filename = "naivsnaps.bin"
nsnaps = 100
fobj = open(filename, "rb")
snapsObj = np.fromfile(fobj, dtype=np.float32)
snapsObj = np.reshape(snapsObj, (nsnaps, vnx, vnz))
fobj.close()
fig, ax = plt.subplots()
matrice = ax.imshow(snapsObj[0, :, :].T, vmin=-1, vmax=1, cmap="seismic")
plt.colorbar(matrice)
plt.xlabel('x')
plt.ylabel('z')
plt.title('Modelling one shot over a 2-layer velocity model with Devito.')
def update(i):
matrice.set_array(snapsObj[i, :, :].T)
return matrice,
# Animation
ani = animation.FuncAnimation(fig, update, frames=nsnaps, interval=50, blit=True)
plt.close(ani._fig)
HTML(ani.to_html5_video())
Explanation: To run snaps as a movie (outside Jupyter Notebook), run the code below, altering filename, nsnaps, nx, nz accordingly:
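If you want the animation as a standalone movie file instead of inline HTML, a minimal variant could save it to disk (a sketch only; it assumes the ani object from the cell above and an ffmpeg installation, which this notebook does not otherwise require):
```python
# Sketch: write the animation to an .mp4 file for playback outside the notebook.
ani.save("naivsnaps_movie.mp4", writer="ffmpeg", fps=20)
```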
End of explanation |
11,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Corrupt known signal with point spread
The aim of this tutorial is to demonstrate how to put a known signal at a
desired location(s) in a source estimate and then corrupt the signal with point spread by applying a forward and inverse solution.
Step1: First, we set some parameters.
Step2: Load the MEG data
Step3: Estimate the background noise covariance from the baseline period
Step4: Generate sinusoids in two spatially distant labels
Step5: Find the center vertices in source space of each label
We want the known signal in each label to only be active at the center. We
create a mask for each label that is 1 at the center vertex and 0 at all
other vertices in the label. This mask is then used when simulating
source-space data.
Step6: Create source-space data with known signals
Put known signals onto surface vertices using the array of signals and
the label masks (stored in labels[i].values).
Step7: Plot original signals
Note that the original signals are highly concentrated (point) sources.
Step8: Simulate sensor-space signals
Use the forward solution and add Gaussian noise to simulate sensor-space
(evoked) data from the known source-space signals. The amount of noise is
controlled by nave (higher values imply less noise).
Step9: Plot the point-spread of corrupted signal
Notice that after applying the forward and inverse operators to the known point sources, the point sources have spread across the source space.
This spread is due to the minimum norm solution, which lets the signal leak to nearby vertices with similar orientations, so the signal ends up crossing sulci and gyri. | Python Code:
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
from mne.simulation import simulate_stc, simulate_evoked
Explanation: Corrupt known signal with point spread
The aim of this tutorial is to demonstrate how to put a known signal at a
desired location(s) in a :class:mne.SourceEstimate and then corrupt the
signal with point-spread by applying a forward and inverse solution.
End of explanation
seed = 42
# parameters for inverse method
method = 'sLORETA'
snr = 3.
lambda2 = 1.0 / snr ** 2
# signal simulation parameters
# do not add extra noise to the known signals
nave = np.inf
T = 100
times = np.linspace(0, 1, T)
dt = times[1] - times[0]
# Paths to MEG data
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_fwd = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-fwd.fif')
fname_inv = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-fixed-inv.fif')
fname_evoked = op.join(data_path, 'MEG', 'sample',
'sample_audvis-ave.fif')
Explanation: First, we set some parameters.
End of explanation
fwd = mne.read_forward_solution(fname_fwd)
fwd = mne.convert_forward_solution(fwd, force_fixed=True, surf_ori=True,
use_cps=False)
fwd['info']['bads'] = []
inv_op = read_inverse_operator(fname_inv)
raw = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw.fif'))
raw.set_eeg_reference(projection=True)
events = mne.find_events(raw)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
epochs = mne.Epochs(raw, events, event_id, baseline=(None, 0), preload=True)
epochs.info['bads'] = []
evoked = epochs.average()
labels = mne.read_labels_from_annot('sample', subjects_dir=subjects_dir)
label_names = [label.name for label in labels]
n_labels = len(labels)
Explanation: Load the MEG data
End of explanation
cov = mne.compute_covariance(epochs, tmin=None, tmax=0.)
Explanation: Estimate the background noise covariance from the baseline period
End of explanation
# The known signal is all zero-s off of the two labels of interest
signal = np.zeros((n_labels, T))
idx = label_names.index('inferiorparietal-lh')
signal[idx, :] = 1e-7 * np.sin(5 * 2 * np.pi * times)
idx = label_names.index('rostralmiddlefrontal-rh')
signal[idx, :] = 1e-7 * np.sin(7 * 2 * np.pi * times)
Explanation: Generate sinusoids in two spatially distant labels
End of explanation
hemi_to_ind = {'lh': 0, 'rh': 1}
for i, label in enumerate(labels):
# The `center_of_mass` function needs labels to have values.
labels[i].values.fill(1.)
# Restrict the eligible vertices to be those on the surface under
# consideration and within the label.
surf_vertices = fwd['src'][hemi_to_ind[label.hemi]]['vertno']
restrict_verts = np.intersect1d(surf_vertices, label.vertices)
com = labels[i].center_of_mass(subjects_dir=subjects_dir,
restrict_vertices=restrict_verts,
surf='white')
# Convert the center of vertex index from surface vertex list to Label's
# vertex list.
cent_idx = np.where(label.vertices == com)[0][0]
# Create a mask with 1 at center vertex and zeros elsewhere.
labels[i].values.fill(0.)
labels[i].values[cent_idx] = 1.
# Print some useful information about this vertex and label
if 'transversetemporal' in label.name:
dist, _ = label.distances_to_outside(
subjects_dir=subjects_dir)
dist = dist[cent_idx]
area = label.compute_area(subjects_dir=subjects_dir)
# convert to equivalent circular radius
r = np.sqrt(area / np.pi)
print(f'{label.name} COM vertex is {dist * 1e3:0.1f} mm from edge '
f'(label area equivalent to a circle with r={r * 1e3:0.1f} mm)')
Explanation: Find the center vertices in source space of each label
We want the known signal in each label to only be active at the center. We
create a mask for each label that is 1 at the center vertex and 0 at all
other vertices in the label. This mask is then used when simulating
source-space data.
End of explanation
stc_gen = simulate_stc(fwd['src'], labels, signal, times[0], dt,
value_fun=lambda x: x)
Explanation: Create source-space data with known signals
Put known signals onto surface vertices using the array of signals and
the label masks (stored in labels[i].values).
End of explanation
kwargs = dict(subjects_dir=subjects_dir, hemi='split', smoothing_steps=4,
time_unit='s', initial_time=0.05, size=1200,
views=['lat', 'med'])
clim = dict(kind='value', pos_lims=[1e-9, 1e-8, 1e-7])
brain_gen = stc_gen.plot(clim=clim, **kwargs)
Explanation: Plot original signals
Note that the original signals are highly concentrated (point) sources.
End of explanation
evoked_gen = simulate_evoked(fwd, stc_gen, evoked.info, cov, nave,
random_state=seed)
# Map the simulated sensor-space data to source-space using the inverse
# operator.
stc_inv = apply_inverse(evoked_gen, inv_op, lambda2, method=method)
Explanation: Simulate sensor-space signals
Use the forward solution and add Gaussian noise to simulate sensor-space
(evoked) data from the known source-space signals. The amount of noise is
controlled by nave (higher values imply less noise).
End of explanation
brain_inv = stc_inv.plot(**kwargs)
Explanation: Plot the point-spread of corrupted signal
Notice that after applying the forward and inverse operators to the known point sources, the point sources have spread across the source space.
This spread is due to the minimum norm solution, which lets the signal leak to nearby vertices with similar orientations, so the signal ends up crossing sulci and gyri.
End of explanation |
11,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ensegment
Step1: Documentation
Write some beautiful documentation of your program here. | Python Code:
from default import *
Explanation: ensegment: default program
End of explanation
Pw = Pdist(data=datafile("data/count_1w.txt"))
segmenter = Segment(Pw)
with open("data/input/dev.txt") as f:
for line in f:
print(" ".join(segmenter.segment(line.strip())))
Explanation: Documentation
Write some beautiful documentation of your program here.
End of explanation |
11,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature Selection
Step1: Data Split
Ideally, we'd perform stratified 5x4-fold cross-validation; however, given the timeframe, we'll stick with a single split. We'll use an old chunk of data as training, a more recent one as validation, and finally, the most recent data as the test set.
Don't worry, we'll use K-fold cross-validation in the next notebook.
Since the data we want to predict is in the future, we'll use the first 60% as training, and the following 20% as validation and 20% test.
Step2: Checking the data distribution, we see that this is a good split (considering the proportion of targets)
Step3: Normalization
For the sake of simplicity, we will use the 0-1 range normalization
Step4: Feature Importance
Variance Threshold
Step5: Correlation
Step6: Features loc_x and loc_y are strongly correlated with the latitude and longitude columns; for this reason, we'll delete loc_x and loc_y.
Step7: Feature Importance (Trees)
This can be done using sklearn.feature_selection.SelectFromModel; however, we do it ourselves in order to get a better view of the process.
Step8: Based on these results, we'll select the top N features
Step9: It seems like the location and category features are more important than the date related features.
Among the date-related features, the system also selected opened_dayofweek and opened_dayofmonth.
Test Set | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import bokeh
from bokeh.io import output_notebook
output_notebook()
import os
DATA_STREETLIGHT_CASES_URL = 'https://data.sfgov.org/api/views/c53t-rr3f/rows.json?accessType=DOWNLOAD'
DATA_STREETLIGHT_CASES_LOCAL = 'DATA_STREETLIGHT_CASES.json'
data_path = DATA_STREETLIGHT_CASES_URL
if os.path.isfile(DATA_STREETLIGHT_CASES_LOCAL):
data_path = DATA_STREETLIGHT_CASES_LOCAL
import urllib, json
def _load_data(url):
response = urllib.urlopen(url)
raw_data = json.loads(response.read())
columns = [col['name'] for col in raw_data['meta']['view']['columns']]
rows = raw_data['data']
return pd.DataFrame(data=rows, columns=columns)
df = _load_data(data_path)
df.columns = [col.lower().replace(' ', '_') for col in df.columns]
df['opened'] = pd.to_datetime(df.opened)
df['opened_dayofweek'] = df.opened.dt.dayofweek
df['opened_month'] = df.opened.dt.month
df['opened_year'] = df.opened.dt.year
df['opened_dayofmonth'] = df.opened.dt.day
df['opened_weekend'] = df.opened_dayofweek >= 5
df['closed'] = pd.to_datetime(df.closed)
df['closed_dayofweek'] = df.closed.dt.dayofweek
df['closed_month'] = df.closed.dt.month
df['closed_year'] = df.closed.dt.year
df['closed_dayofmonth'] = df.closed.dt.day
df['closed_weekend'] = df.closed_dayofweek >= 5
df['delta'] = (df.closed - df.opened).dt.days
df['is_open'] = pd.isnull(df.closed)
df['target'] = df.delta <= 2
from geopy.distance import vincenty
df['latitude'] = df.point.apply(lambda e: float(e[1]))
df['longitude'] = df.point.apply(lambda e: float(e[2]))
min_lat, max_lat = min(df.latitude), max(df.latitude)
min_lng, max_lng = min(df.longitude), max(df.longitude)
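# grid() converts (lat, lng) into approximate (x, y) distances in miles from the data's minimum longitude/latitude corner.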
def grid(lat, lng):
x = vincenty((lat, min_lng), (lat, lng)).miles
y = vincenty((min_lat, lng), (lat, lng)).miles
return x, y
xy = [grid(lat, lng) for lat, lng in zip(df.latitude.values, df.longitude.values)]
df['loc_x'] = np.array(xy)[:,0]
df['loc_y'] = np.array(xy)[:,1]
dummies = pd.get_dummies(df.neighborhood.str.replace(' ', '_').str.lower(), prefix='neigh_', drop_first=False)
df[dummies.columns] = dummies
del df['neighborhood']
dummies = pd.get_dummies(df.category.str.replace(' ', '_').str.lower(), prefix='cat_', drop_first=False)
df[dummies.columns] = dummies
del df['category']
dummies = pd.get_dummies(df.source.str.replace(' ', '_').str.lower(), prefix='source_', drop_first=False)
df[dummies.columns] = dummies
del df['source']
df['status'] = df.status == 'Closed'
del df['sid']
del df['id']
del df['position']
del df['created_at']
del df['created_meta']
del df['updated_at']
del df['updated_meta']
del df['meta']
del df['caseid']
del df['address']
del df['responsible_agency']
del df['request_details']
del df['request_type']
del df['status']
del df['updated']
del df['supervisor_district']
del df['point']
df = df.sort_values(by='opened', ascending=True)
del df['opened']
del df['closed']
del df['closed_dayofweek']
del df['closed_month']
del df['closed_year']
del df['closed_dayofmonth']
del df['closed_weekend']
del df['delta']
del df['is_open']
# deleting opened_year because there is only 2012 and 2013, which are not relevant for future classifications
del df['opened_year']
df = df.dropna()
columns = list(df.columns)
columns.remove('target')
columns.append('target')
df = df[columns]
feature_columns = columns[:-1]
Explanation: Feature Selection
End of explanation
l1 = int(df.shape[0]*0.6)
l2 = int(df.shape[0]*0.8)
df_tra = df.loc[range(0,l1)]
df_val = df.loc[range(l1,l2)]
df_tst = df.loc[range(l2, df.shape[0])]
df_tra.shape, df_val.shape, df_tst.shape, df.shape
Explanation: Data Split
Ideally, we'd perform stratified 5x4-fold cross-validation; however, given the timeframe, we'll stick with a single split. We'll use an old chunk of data as training, a more recent one as validation, and finally, the most recent data as the test set.
Don't worry, we'll use K-fold cross-validation in the next notebook.
Since the data we want to predict is in the future, we'll use the first 60% as training, and the following 20% as validation and 20% test.
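As a pointer for that follow-up, a time-aware K-fold could be sketched with scikit-learn's TimeSeriesSplit (shown only for illustration; it is not used in this notebook and assumes the rows are already sorted by date, as they are here):
```python
# Sketch: rolling, time-ordered folds instead of a single 60/20/20 split.
from sklearn.model_selection import TimeSeriesSplit

X_all = df.drop(labels=['target'], axis=1, inplace=False).values
y_all = df.target.values
tscv = TimeSeriesSplit(n_splits=5)
folds = [(train_idx, test_idx) for train_idx, test_idx in tscv.split(X_all)]
```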
End of explanation
fig, axs = plt.subplots(1,3, sharex=True, sharey=True, figsize=(12,3))
axs[0].hist(df_tra.target, bins=2)
axs[1].hist(df_val.target, bins=2)
axs[2].hist(df_tst.target, bins=2)
axs[0].set_title('Training')
axs[1].set_title('Validation')
axs[2].set_title('Test')
X_tra = df_tra.drop(labels=['target'], axis=1, inplace=False).values
y_tra = df_tra.target.values
X_val = df_val.drop(labels=['target'], axis=1, inplace=False).values
y_val = df_val.target.values
X_tst = df_tst.drop(labels=['target'], axis=1, inplace=False).values
y_tst = df_tst.target.values
Explanation: Checking the data distribution, we see that this is a good split (considering the proportion of targets)
End of explanation
from sklearn.preprocessing import MinMaxScaler
normalizer = MinMaxScaler().fit(X_tra)
X_tra = normalizer.transform(X_tra)
X_val = normalizer.transform(X_val)
X_tst = normalizer.transform(X_tst)
Explanation: Normalization
For the sake of simplicity, we will use the 0-1 range normalization:
$ x_i = \dfrac{x_i - min(x_i)}{max(x_i) - min(x_i)}$
This is allowed because we do not have that many 'outliers' in our features.
The Alpha-Trimmed normalization or Standard Scaler normalization would be more appropriate if we introduced other (interesting) features such as:
- Average cases/week in the neighborhood.
- Number of cases in the last X days in that neighborhood.
End of explanation
from sklearn.feature_selection import VarianceThreshold
print X_tra.shape
threshold=(.999 * (1 - .999))
sel = VarianceThreshold(threshold=threshold)
X_tra = sel.fit(X_tra).transform(X_tra)
X_val = sel.transform(X_val)
X_tst = sel.transform(X_tst)
print X_tra.shape
removed_features_1 = np.array(columns)[np.where(sel.variances_ < threshold)]
selected_features_1 = np.array(feature_columns)[np.where(sel.variances_ >= threshold)]
print 'removed_features'
print removed_features_1
Explanation: Feature Importance
Variance Threshold
End of explanation
plt.figure(figsize=(12,8))
sns.heatmap(df.corr('pearson'))
Explanation: Correlation
End of explanation
del df['loc_x']
del df['loc_y']
Explanation: Features loc_x and loc_y are strongly correlated with the latitude and longitude columns; for this reason, we'll delete loc_x and loc_y.
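Before the del statements above, the pairwise coefficients behind that decision could be checked numerically (a sketch giving the same information as the heatmap):
```python
# Sketch: numeric view of the Pearson coefficients for the location columns.
print df[['latitude', 'longitude', 'loc_x', 'loc_y']].corr('pearson')
```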
End of explanation
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
def feature_importance(X, y, feat_names, forest='random_forest', plot=False, print_=False):
# Build a forest and compute the feature importances
if forest == 'random_forest':
forest = RandomForestClassifier(n_estimators=200, random_state=0)
elif forest == 'extra_trees':
forest = ExtraTreesClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
sd = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
mn = np.mean([tree.feature_importances_ for tree in forest.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
if print_:
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d (%f) %s" % (f + 1, indices[f], importances[indices[f]], feat_names[indices[f]]))
if plot:
plt.figure(figsize=(16,3))
plt.title("Feature importances")
plt.bar(range(len(importances)), importances[indices],
color="r", yerr=sd[indices], align="center")
plt.xticks(range(len(importances)), indices)
plt.xlim([-1, len(indices)])
plt.show()
return indices, importances
indices, importances = feature_importance(X_tra, y_tra, selected_features_1, plot=True, forest='random_forest')
indices, importances = feature_importance(X_tra, y_tra, selected_features_1, plot=True, forest='extra_trees')
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
scores = []
for i in range(1,len(indices)):
mask = indices[:i]
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_tra[:,mask], y_tra)
score = roc_auc_score(y_val, clf.predict_proba(X_val[:,mask])[:,1])
scores.append(score)
plt.plot(np.arange(len(scores)), scores)
plt.xlabel("# Features")
plt.ylabel("AUC")
max_index = np.argmax(scores)
sel_index = 18
Explanation: Feature Importance (Trees)
This can be done using sklearn.feature_selection.SelectFromModel; however, we do it ourselves in order to get a better view of the process.
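For reference, the SelectFromModel route would look roughly like this sketch (the threshold is illustrative, not tuned):
```python
# Sketch: let scikit-learn keep features whose importance exceeds the mean importance.
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier

sfm = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0),
                      threshold='mean')
X_tra_sel = sfm.fit_transform(X_tra, y_tra)
X_val_sel = sfm.transform(X_val)
```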
End of explanation
selected_features_2 = np.array(selected_features_1)[indices[:sel_index]]
selected_features_2
Explanation: Based on these results, we'll select the top N features
End of explanation
from sklearn.metrics import roc_curve, auc
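# find_cutoff() below returns the probability threshold at which sensitivity is closest to specificity (tpr ~ 1 - fpr).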
def find_cutoff(y_true, y_pred):
fpr, tpr, threshold = roc_curve(y_true, y_pred)
i = np.arange(len(tpr))
roc = pd.DataFrame({'tf' : pd.Series(tpr-(1-fpr), index=i), 'threshold' : pd.Series(threshold, index=i)})
roc_t = roc.ix[(roc.tf-0).abs().argsort()[:1]]
return list(roc_t['threshold'])[0]
from sklearn.feature_selection import SelectFromModel, SelectKBest
from sklearn.pipeline import Pipeline
def __feature_importance(X, y):
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)
return forest.feature_importances_
pipe = Pipeline([
('normalizer', MinMaxScaler()),
('selection_threshold', VarianceThreshold(threshold=(.999 * (1 - .999)))),
('selection_kbest', SelectKBest(__feature_importance, k=31)),
('classifier', RandomForestClassifier(n_estimators=100))])
pipe.fit(X_tra, y_tra)
y_proba = pipe.predict_proba(X_tst)
cutoff = find_cutoff(y_tst, y_proba[:,1])
from sklearn.metrics import roc_curve, auc
fpr, tpr, thresh = roc_curve(y_tst, y_proba[:,1])
auc_roc = auc(fpr, tpr)
print 'cutoff {:.4f}'.format(cutoff)
plt.title('ROC Curve')
plt.plot(fpr, tpr, 'b',
label='AUC = %0.2f'% auc_roc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
from sklearn.metrics import classification_report
print classification_report(y_tst, y_proba[:,1] >= cutoff)
import sqlite3
from sqlalchemy import create_engine
SQL_ENGINE = create_engine('sqlite:///streetlight_cases.db')
df.to_sql('new_data', SQL_ENGINE, if_exists='replace', index=False)
Explanation: It seems like the location and category features are more important than the date related features.
Among the date-related features, the system also selected opened_dayofweek and opened_dayofmonth.
Test Set
End of explanation |
11,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Defining network architecture (we use Arch-2)
We also define some functions to make training convinent here.
Step2: Mounting folder from Google Drive
Step3: Verify the files are correctly mounted and available
Step4: Load the model in case a trained one is already available
Step5: The training step
Step6: Once again save the model (although it is already saved each epoch)
Step7: Evaluate using the trained model
Step8: Plotting the loss and ROC
Step9: The 2D Histogram of QCD Loss vs Mass | Python Code:
# This program will not generate the jet images, it will only train the autoencoder
# and evaluate the results. The jet images can be found in:
# https://drive.google.com/drive/folders/1i5DY9duzDuumQz636u5YQeYQEt_7TYa8?usp=sharing
# Please download those images to your google drive and use the colab - drive integration.
import lzma
from google.colab import drive
import numpy as np
import tensorflow as tf
import keras
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt
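# READ_XZ() decompresses an .xz file and returns its contents as a flat float32 NumPy array.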
def READ_XZ (filename):
file = lzma.LZMAFile(filename)
type_bytes = file.read(-1)
type_array = np.frombuffer(type_bytes,dtype='float32')
return type_array
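# Count() returns the fraction of entries in `array` that are strictly greater than `val`.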
def Count(array,val):
count = 0.0
for e in range(array.shape[0]):
if array[e]>val :
count=count+1.0
return count / array.shape[0]
width=40
batch_size=200
ModelName = "Model_40_24_8_24_40_40"
config = tf.ConfigProto( device_count = {'GPU': 1 , 'CPU': 2} )
sess = tf.Session(config=config)
keras.backend.set_session(sess)
K.tensorflow_backend._get_available_gpus()
Explanation: <a href="https://colab.research.google.com/github/aravindhv10/CPP_Wrappers/blob/master/AntiQCD4/Training_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This program will not generate the jet images, it will only train the autoencoder
and evaluate the results. The jet images can be found in:
https://drive.google.com/drive/folders/1i5DY9duzDuumQz636u5YQeYQEt_7TYa8?usp=sharing
Please download those images to your google drive and use the colab - drive integration.
A program to generate jet images is available at
https://github.com/aravindhv10/CPP_Wrappers/blob/master/AntiQCD4/JetImageFormation.hh
in the form of the class BoxImageGen.
The images used in this program were produced using BoxImageGen<40,float,true> with the ratio $m_J/E_J=0.5$.
End of explanation
# this is our input placeholder
input_img = Input(shape=(width*width,))
# "encoded" is the encoded representation of the input
Layer1 = Dense(24*24, activation='relu')(input_img)
Layer2 = Dense(8*8, activation='relu')(Layer1)
Layer3 = Dense(24*24, activation='relu')(Layer2)
Layer4 = Dense(40*40, activation='relu')(Layer3)
Out = Dense(40*40, activation='softmax')(Layer4)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, Out)
autoencoder.compile(optimizer='adam', loss='mean_squared_error')
def EvalOnFile (InFileName,OutFileName):
data = READ_XZ (InFileName)
x_train = data.reshape(-1,width*width)
x_out = autoencoder.predict(x_train,200,use_multiprocessing=True)
diff = x_train - x_out
lrnorm = np.ones((diff.shape[0]))
for e in range(diff.shape[0]):
lrnorm[e] = np.linalg.norm(diff[e])
lrnorm.tofile(OutFileName)
print(lrnorm.shape)
def TrainOnFile (filename,testfilename,totalepochs):
data = READ_XZ (filename)
x_train = data.reshape(-1,width*width)
datatest = READ_XZ (testfilename)
x_test = datatest.reshape(-1,width*width)
autoencoder.fit(
x_train, x_train, epochs=totalepochs,
batch_size=200, shuffle=True,
validation_data=(x_test, x_test)
)
autoencoder.save(ModelName)
Explanation: Defining network architecture (we use Arch-2)
We also define some functions to make training convenient here.
End of explanation
# Please download the files from the link below and appropriately change this program:
# https://drive.google.com/drive/folders/1i5DY9duzDuumQz636u5YQeYQEt_7TYa8?usp=sharing
drive.mount('/gdrive')
%cd /gdrive
Explanation: Mounting folder from Google Drive:
End of explanation
%cd /gdrive/My\ Drive/JetImages/QCD/
!ls ./TEST/BoxImages/0.xz
!ls ./TRAIN/BoxImages/0.xz
Explanation: Verify the files are correctly mounted and available:
End of explanation
%cd /gdrive/My Drive/JetImages/QCD
autoencoder = keras.models.load_model(ModelName)
Explanation: Load the model in case a trained one is already available:
End of explanation
%cd /gdrive/My Drive/JetImages/QCD
for e in range(4):
TrainOnFile("./TRAIN/BoxImages/0.xz","./TEST/BoxImages/0.xz",10)
TrainOnFile("./TRAIN/BoxImages/1.xz","./TEST/BoxImages/1.xz",10)
TrainOnFile("./TRAIN/BoxImages/2.xz","./TEST/BoxImages/2.xz",10)
TrainOnFile("./TRAIN/BoxImages/3.xz","./TEST/BoxImages/3.xz",10)
TrainOnFile("./TRAIN/BoxImages/4.xz","./TEST/BoxImages/4.xz",10)
TrainOnFile("./TRAIN/BoxImages/5.xz","./TEST/BoxImages/5.xz",10)
TrainOnFile("./TRAIN/BoxImages/6.xz","./TEST/BoxImages/6.xz",10)
TrainOnFile("./TRAIN/BoxImages/7.xz","./TEST/BoxImages/7.xz",10)
TrainOnFile("./TRAIN/BoxImages/8.xz","./TEST/BoxImages/8.xz",10)
TrainOnFile("./TRAIN/BoxImages/9.xz","./TEST/BoxImages/9.xz",10)
TrainOnFile("./TRAIN/BoxImages/10.xz","./TEST/BoxImages/10.xz",10)
TrainOnFile("./TRAIN/BoxImages/11.xz","./TEST/BoxImages/11.xz",10)
TrainOnFile("./TRAIN/BoxImages/12.xz","./TEST/BoxImages/12.xz",10)
TrainOnFile("./TRAIN/BoxImages/13.xz","./TEST/BoxImages/13.xz",10)
TrainOnFile("./TRAIN/BoxImages/14.xz","./TEST/BoxImages/14.xz",10)
TrainOnFile("./TRAIN/BoxImages/15.xz","./TEST/BoxImages/15.xz",10)
Explanation: The training step:
End of explanation
%cd /gdrive/My Drive/JetImages/QCD
autoencoder.save(ModelName)
# autoencoder = keras.models.load_model(ModelName)
%cd /gdrive/My Drive/JetImages/QCD
!ls -lh
%cd /gdrive/My Drive/JetImages/QCD
!xz -z9evvfk Model_40_24_8_24_40_40
Explanation: Once again save the model (although it is already saved each epoch)
End of explanation
%cd /gdrive/My Drive/JetImages/QCD
EvalOnFile("./TEST/BoxImages/0.xz","./TEST/BoxImages/0_out")
EvalOnFile("./TEST/BoxImages/1.xz","./TEST/BoxImages/1_out")
EvalOnFile("./TEST/BoxImages/2.xz","./TEST/BoxImages/2_out")
EvalOnFile("./TEST/BoxImages/3.xz","./TEST/BoxImages/3_out")
EvalOnFile("./TEST/BoxImages/4.xz","./TEST/BoxImages/4_out")
EvalOnFile("./TEST/BoxImages/5.xz","./TEST/BoxImages/5_out")
EvalOnFile("./TEST/BoxImages/6.xz","./TEST/BoxImages/6_out")
EvalOnFile("./TEST/BoxImages/7.xz","./TEST/BoxImages/7_out")
%cd /gdrive/My Drive/JetImages/TOP
EvalOnFile("./TEST/BoxImages/0.xz","./TEST/BoxImages/0_out")
EvalOnFile("./TEST/BoxImages/1.xz","./TEST/BoxImages/1_out")
EvalOnFile("./TEST/BoxImages/2.xz","./TEST/BoxImages/2_out")
EvalOnFile("./TEST/BoxImages/3.xz","./TEST/BoxImages/3_out")
EvalOnFile("./TEST/BoxImages/4.xz","./TEST/BoxImages/4_out")
EvalOnFile("./TEST/BoxImages/5.xz","./TEST/BoxImages/5_out")
EvalOnFile("./TEST/BoxImages/6.xz","./TEST/BoxImages/6_out")
EvalOnFile("./TEST/BoxImages/7.xz","./TEST/BoxImages/7_out")
%cd /gdrive/My Drive/JetImages/
!cat TOP/TEST/BoxImages/*_out > TOP_OUT
!cat QCD/TEST/BoxImages/*_out > QCD_OUT
!ls TOP/TEST/BoxImages/*_out TOP_OUT -lh
!ls QCD/TEST/BoxImages/*_out QCD_OUT -lh
Explanation: Evaluate using the trained model:
End of explanation
%cd /gdrive/My Drive/JetImages/
qcdloss = np.fromfile("QCD_OUT", dtype=float, count=-1, sep='', offset=0)
toploss = np.fromfile("TOP_OUT", dtype=float, count=-1, sep='', offset=0)
qcdloss=np.sort(qcdloss)
toploss=np.sort(toploss)
print(qcdloss.shape)
print(toploss.shape)
plt.hist(toploss,100,(0.0,0.4),density=True,histtype='step')
plt.hist(qcdloss,100,(0.0,0.4),density=True,histtype='step')
plt.show()
dx = (0.4 - 0.0) / 100.0
qcdeff = np.ones((100))
topeff = np.ones((100))
for i in range(100):
xval = i*dx
qcdeff[i]=1.0/(Count(qcdloss,xval)+0.0000000001)
topeff[i]=Count(toploss,xval)
plt.yscale('log')
plt.plot(topeff,qcdeff)
%cd /gdrive/My Drive/JetImages/
def ReadLossMass(lossname,massname):
loss = np.fromfile(lossname, dtype=float, count=-1, sep='', offset=0)
mass = np.fromfile(massname, dtype='float32', count=-1, sep='', offset=0)
out = np.ones((mass.shape[0],2))
for i in range(mass.shape[0]):
out[i][0] = loss[i]
out[i][1] = mass[i]
return out
def GetQCDPair () :
pair = ReadLossMass("QCD/TEST/BoxImages/0_out","QCD/TEST/Mass/0")
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/1_out","QCD/TEST/Mass/1"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/2_out","QCD/TEST/Mass/2"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/3_out","QCD/TEST/Mass/3"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/4_out","QCD/TEST/Mass/4"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/5_out","QCD/TEST/Mass/5"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/6_out","QCD/TEST/Mass/6"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/7_out","QCD/TEST/Mass/7"),0)
return pair
def GetTOPPair () :
pair = ReadLossMass("TOP/TEST/BoxImages/0_out","TOP/TEST/Mass/0")
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/1_out","TOP/TEST/Mass/1"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/2_out","TOP/TEST/Mass/2"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/3_out","TOP/TEST/Mass/3"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/4_out","TOP/TEST/Mass/4"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/5_out","TOP/TEST/Mass/5"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/6_out","TOP/TEST/Mass/6"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/7_out","TOP/TEST/Mass/7"),0)
return pair
qcdpair = GetQCDPair()
toppair = GetTOPPair()
Explanation: Plotting the loss and ROC:
End of explanation
#plt.hist(qcdpair[:,1],100,(0.0,300.0),density=True,histtype='step')
#plt.hist(toppair[:,1],100,(0.0,300.0),density=True,histtype='step')
plt.hist2d(qcdpair[:,1],qcdpair[:,0],bins=100,range=[[0,400],[0.0,0.3]])
plt.show()
def QCDMassBin(minmass,maxmass):
ret = np.ones((1))
for e in range(qcdpair.shape[0]):
if (minmass < qcdpair[e][1]) and (qcdpair[e][1] < maxmass) :
if e == 0 :
ret[e] = qcdpair[e][0]
else:
ret = np.append(ret,qcdpair[e][0])
return ret
plt.hist(QCDMassBin(0,100),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(100,200),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(200,300),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(300,400),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(400,500),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(500,5000),100,(0.0,0.4),density=True,histtype='step')
plt.show()
Explanation: The 2D Histogram of QCD Loss vs Mass
End of explanation |
11,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot single trial activity, grouped by ROI and sorted by RT
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
The EEGLAB example file, which contains an experiment with button press
responses to simple visual stimuli, is read in and response times are
calculated.
Regions of Interest are determined by the channel types (in 10/20 channel
notation, even channels are right, odd are left, and 'z' are central). The
median and the Global Field Power within each channel group is calculated,
and the trials are plotted, sorting by response time.
Step1: Load EEGLAB example data (a small EEG dataset)
Step2: Create Epochs
Step3: Plot using Global Field Power (GFP)
Step4: Plot using median | Python Code:
# Authors: Jona Sassenhagen <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.event import define_target_events
from mne.channels import make_1020_channel_selections
print(__doc__)
Explanation: Plot single trial activity, grouped by ROI and sorted by RT
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
The EEGLAB example file, which contains an experiment with button press
responses to simple visual stimuli, is read in and response times are
calculated.
Regions of Interest are determined by the channel types (in 10/20 channel
notation, even channels are right, odd are left, and 'z' are central). The
median and the Global Field Power within each channel group is calculated,
and the trials are plotted, sorting by response time.
End of explanation
data_path = mne.datasets.testing.data_path()
fname = data_path + "/EEGLAB/test_raw.set"
event_id = {"rt": 1, "square": 2} # must be specified for str events
raw = mne.io.read_raw_eeglab(fname)
mapping = {
'EEG 000': 'Fpz', 'EEG 001': 'EOG1', 'EEG 002': 'F3', 'EEG 003': 'Fz',
'EEG 004': 'F4', 'EEG 005': 'EOG2', 'EEG 006': 'FC5', 'EEG 007': 'FC1',
'EEG 008': 'FC2', 'EEG 009': 'FC6', 'EEG 010': 'T7', 'EEG 011': 'C3',
'EEG 012': 'C4', 'EEG 013': 'Cz', 'EEG 014': 'T8', 'EEG 015': 'CP5',
'EEG 016': 'CP1', 'EEG 017': 'CP2', 'EEG 018': 'CP6', 'EEG 019': 'P7',
'EEG 020': 'P3', 'EEG 021': 'Pz', 'EEG 022': 'P4', 'EEG 023': 'P8',
'EEG 024': 'PO7', 'EEG 025': 'PO3', 'EEG 026': 'POz', 'EEG 027': 'PO4',
'EEG 028': 'PO8', 'EEG 029': 'O1', 'EEG 030': 'Oz', 'EEG 031': 'O2'
}
raw.rename_channels(mapping)
raw.set_channel_types({"EOG1": 'eog', "EOG2": 'eog'})
raw.set_montage('standard_1020')
events = mne.events_from_annotations(raw, event_id)[0]
Explanation: Load EEGLAB example data (a small EEG dataset)
End of explanation
# define target events:
# 1. find response times: distance between "square" and "rt" events
# 2. extract A. "square" events B. followed by a button press within 700 msec
tmax = .7
sfreq = raw.info["sfreq"]
reference_id, target_id = 2, 1
new_events, rts = define_target_events(events, reference_id, target_id, sfreq,
tmin=0., tmax=tmax, new_id=2)
epochs = mne.Epochs(raw, events=new_events, tmax=tmax + .1,
event_id={"square": 2})
Explanation: Create Epochs
End of explanation
# Parameters for plotting
order = rts.argsort() # sorting from fast to slow trials
selections = make_1020_channel_selections(epochs.info, midline="12z")
# The actual plots (GFP)
epochs.plot_image(group_by=selections, order=order, sigma=1.5,
overlay_times=rts / 1000., combine='gfp',
ts_args=dict(vlines=[0, rts.mean() / 1000.]))
Explanation: Plot using :term:Global Field Power <GFP>
End of explanation
epochs.plot_image(group_by=selections, order=order, sigma=1.5,
overlay_times=rts / 1000., combine='median',
ts_args=dict(vlines=[0, rts.mean() / 1000.]))
Explanation: Plot using median
End of explanation |
11,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples
Step1: Building on our discussion of modules from last week, we'll use the my_dataset module that I have prepared as a basis. This module is largely identical to what we have built out in previous weeks.
Step2: We'll use a read_csv function that is provided in our module to feed into a Dataset object. This loads in our full dataset, and we'll take a look at the columns it generates.
Note that we're using a different encoding. In particular, this is because the file is encoded in cp1252, a Windows-specific extension of Latin-1 whose extra characters can't be decoded as UTF-8. If we didn't specify the encoding, the read would crash with a decode error.
Step3: Let's see what data is available.
Step4: Wow! OK, so, we have a column for each year that data is available. Other than that, we have an indicator code, a country code, and then the longer names of those two things. How many indicator codes are there? (Spoiler alert
Step5: We'll convert our columns like usual, but this time we'll "blacklist" the ones we don't want to convert to float.
Step6: Let's do some checks to make sure we've got a full set of values, by checking that we have N_countries by N_indicators.
Step7: Awesome, we're good.
It's a bit unwieldy to have a bunch of columns. We're going to join all the years, in order, so that we can have just a single column for that. This column will be N-wide, where N is the number of years.
To do this, we'll create a list of arrays, and then use np.array to turn them into a single array.
Step8: But it's the wrong shape, as we'll see
Step9: So we'll transpose it (switching first and last axes) and then stick it into our dataset as the column "values".
Step10: Just to make sure, let's double-check that our values column shows up.
Step11: Let's get started. I've picked out an indicator we can use, which is the population in the largest city.
Step12: What's the population in just the US in this city over time?
Step13: We'll need some years to plot against, so we will use np.arange to generate the values to supply to our plotting routines. This routine drops off the final value, so we have to go to N+1 where N is the final value we want.
Step14: Great. That kind of works as we want. Now let's clean it up, put it in a function, and look at a couple different countries. And, while we're at it, we'll go ahead and make the indicator name a parameter, too.
We'll plug this into the ipywidgets.interact function momentarily.
Step15: What countries and indicators do we have to choose from? And, let's pick a couple at random.
Step16: Now we'll test our routine.
Step17: Note something here -- it's masking data that doesn't exist in the dataset. Those points appear as discontinuities. This isn't always what we want; for instance, we may instead wish to show those as a continuous, but irregularly spaced, line plot. However, in this case, we want to emphasize the discontinuity.
Now, we'll use ipywidgets to change which country and indicator we plot. This isn't incredibly useful, since the number of indicators is intimidatingly long, but we will use it as a quick way to explore some of the data.
Note that we're calling tolist() on these. If we just feed in arrays, the interact function gets a bit confused.
Step18: Let's just toss everything into a single plot. This is a bad idea, and let's demonstrate why.
Step19: Instead, we'll create a function that plots only those countries whose values were greater than those in the US in a given year. For instance, if you specify 1968, only those countries whose population was greater than the US in 1968 will be plotted.
Step20: Again, we'll use ipywidgets.interact to do this.
Step21: Now let's see if we can get the top N countries for a given indicator. Note that we're doing a couple things here that we've done in the past, but in a more optimized form.
Inside our routine, we filter the dataset down to the indicator name that we're given. We then compute the maximum along all years for each country, then we sort that maximum. We then iterate over the sorted list, in opposite order, and plot those countries.
Step22: Let's see which indicators might be interesting -- specifically, let's take a look at all the indicators that have "Population" in their name.
Step23: We have lots of options. So, let's filter out a couple of them -- we'll get the three components of population. These should all add up to 100%, so we'll use them as an example of how to do stacked plots.
The three components we'll pull out are the age between 0-14, 15-64, and 65 and above. We'll do this for the United States. (But, remember, you could extend this to select the country as well!)
Step24: We'll make a simple plot first, just to see the trends.
Step25: Probably not a huge surprise, since the age range 15-64 is quite big. Let's verify that these sum up to 100%.
Step26: Yup, looks like. Now we'll make a stackplot. This is a plot where we're showing the area included in each, and putting one on top of the other. This type of plot is useful for when you want to show the variation in composition over time of different quantities.
Step27: We can plot both the line plots and the stacked plots next to each other to get a more visual comparison between them.
Step28: What about things that don't "stack" to a constant value, but a value varying over time? We can also use a stack plot here, which shows the trendlines, but note that it's not always going to be as clearly demonstrative of the relative percentages. While it shows composition, extracting from that the specific composition is not trivial. | Python Code:
%matplotlib inline
Explanation: Examples: Week 7
This week, we will apply some of our discussions around filtering, splitting and so on to build out comparisons between different variables within the World Bank Economic Indicators dataset.
This dataset, which covers 1960-2016, 264 countries and 1452 variables, can be obtained at the Worldbank Data site.
End of explanation
import sys
sys.path.insert(0, "/srv/nbgrader/data/WDI/")
from my_dataset import Dataset, read_csv
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets
plt.rcParams["figure.figsize"] = (6, 5)
Explanation: Building on our discussion of modules from last week, we'll use the my_dataset module that I have prepared as a basis. This module is largely identical to what we have built out in previous weeks.
End of explanation
data = Dataset(read_csv("/srv/nbgrader/data/WDI/WDI_Data.csv",
encoding="cp1252"))
Explanation: We'll use a read_csv function that is provided in our module to feed into a Dataset object. This loads in our full dataset, and we'll take a look at the columns it generates.
Note that we're using a different encoding. In particular, this is because the file is encoded in cp1252, a Windows-specific encoding; some of its byte values are not valid UTF-8, so reading the file with the default encoding would crash.
End of explanation
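# Aside: a minimal, standalone sketch (not the WDI file itself) of why the
# encoding argument matters -- cp1252 bytes are not always valid UTF-8.
raw = "café".encode("cp1252")        # b'caf\xe9'
try:
    raw.decode("utf-8")
except UnicodeDecodeError as err:
    print("utf-8 decode failed:", err)
print(raw.decode("cp1252"))          # decodes cleanly with the right codec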
data.columns()
Explanation: Let's see what data is available.
End of explanation
np.unique(data["Indicator Code"])
np.unique(data["Indicator Code"]).size
np.unique(data["Country Code"]).size
Explanation: Wow! OK, so, we have a column for each year that data is available. Other than that, we have an indicator code, a country code, and then the longer names of those two things. How many indicator codes are there? (Spoiler alert: I put the number up above already!)
End of explanation
mapping = {"Country Code": "str", "Country Name": "str",
"Indicator Code": "str", "Indicator Name": "str"}
for c in data.columns():
data.convert(c, mapping.get(c, "float"))
Explanation: We'll convert our columns like usual, but this time we'll "blacklist" the ones we don't want to convert to float.
End of explanation
data["1964"].shape
383328/264.
Explanation: Let's do some checks to make sure we've got a full set of values, by checking that we have N_countries by N_indicators.
End of explanation
values = []
for i in range(1960, 2017):
values.append(data[str(i)])
values = np.array(values)
Explanation: Awesome, we're good.
It's a bit unwieldy to have a bunch of columns. We're going to join all the years, in order, so that we can have just a single column for that. This column will be N-wide, where N is the number of years.
To do this, we'll create a list of arrays, and then use np.array to turn them into a single array.
End of explanation
values.shape
Explanation: But it's the wrong shape, as we'll see:
End of explanation
values = values.transpose()
data.data["values"] = values
Explanation: So we'll transpose it (switching first and last axes) and then stick it into our dataset as the column "values".
End of explanation
data.columns()
Explanation: Just to make sure, let's double-check that our values column shows up.
End of explanation
# EN.URB.LCTY
pop_city = data.filter_eq("Indicator Code", "EN.URB.LCTY")
pop_city["values"].shape
Explanation: Let's get started. I've picked out an indicator we can use, which is the population in the largest city.
End of explanation
us_pop_city = pop_city.filter_eq("Country Name", 'United States')
Explanation: What's the population of the largest city in just the US over time?
End of explanation
years = np.arange(1960, 2017)
plt.plot(years, us_pop_city["values"][0,:], '.-')
Explanation: We'll need some years to plot against, so we will use np.arange to generate the values to supply to our plotting routines. This routine drops off the final value, so we have to go to N+1 where N is the final value we want.
End of explanation
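# Quick check of the half-open behaviour described above: np.arange stops one
# short of the endpoint, which is why we ask for 2017 to include 2016.
print(np.arange(1960, 1965))         # [1960 1961 1962 1963 1964]
print(years[0], years[-1], len(years))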
def plot_country_indicator(country, indicator):
indicator_value = data.filter_eq("Indicator Name", indicator)
country_value = indicator_value.filter_eq("Country Name", country)
plt.plot(years, country_value["values"][0,:], '.-')
plt.xlabel("Year")
plt.ylabel(indicator)
Explanation: Great. That kind of works as we want. Now let's clean it up, put it in a function, and look at a couple different countries. And, while we're at it, we'll go ahead and make the indicator name a parameter, too.
We'll plug this into the ipywidgets.interact function momentarily.
End of explanation
indicators = np.unique(data["Indicator Name"])
countries = np.unique(data["Country Name"])
indicators[50], countries[100]
Explanation: What countries and indicators do we have to choose from? And, let's pick a couple at random.
End of explanation
plot_country_indicator(countries[100], indicators[50])
Explanation: Now we'll test our routine.
End of explanation
ipywidgets.interact(plot_country_indicator,
country=countries.tolist(),
indicator=indicators.tolist())
Explanation: Note something here -- it's masking data that doesn't exist in the dataset. Those points appear as discontinuities. This isn't always what we want; for instance, we may instead wish to show those as a continuous, but irregularly spaced, line plot. However, in this case, we want to emphasize the discontinuity.
Now, we'll use ipywidgets to change which country and indicator we plot. This isn't incredibly useful, since the number of indicators is intimidatingly long, but we will use it as a quick way to explore some of the data.
Note that we're calling tolist() on these. If we just feed in arrays, the interact function gets a bit confused.
End of explanation
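# Sketch of the alternative mentioned above: dropping the missing years gives a
# continuous (but irregularly spaced) line instead of one with gaps. Using the
# US series here is just an example.
us_vals = us_pop_city["values"][0, :]
good = ~np.isnan(us_vals)
plt.plot(years[good], us_vals[good], '.-')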
plt.plot(years, pop_city["values"].transpose())
Explanation: Let's just toss everything into a single plot. This is a bad idea, and let's demonstrate why.
End of explanation
def greater_than_year(year = 2010):
year = str(year)
greater_than_us = pop_city.filter_gt(year, us_pop_city[year])
for i, country in enumerate(greater_than_us["Country Name"]):
plt.plot(years, greater_than_us["values"][i,:], label=country)
plt.ylim(0, np.nanmax(pop_city["values"]))
plt.legend()
Explanation: Instead, we'll create a function that plots only those countries whose values were greater than those in the US in a given year. For instance, if you specify 1968, only those countries whose population was greater than the US in 1968 will be plotted.
End of explanation
ipywidgets.interact(greater_than_year, year = (1960, 2016))
Explanation: Again, we'll use ipywidgets.interact to do this.
End of explanation
def plot_country_indicator_top(indicator, top = 5):
indicator_value = data.filter_eq("Indicator Name", indicator)
max_value = np.nanmax(indicator_value["values"], axis=1)
max_value[np.isnan(max_value)] = -1e90
max_indices = np.argsort(max_value)
for ind in reversed(max_indices[-top:]):
plt.plot(years, indicator_value["values"][ind,:], '.-',
label=indicator_value["Country Name"][ind])
plt.xlabel("Year")
plt.legend()
plt.ylabel(indicator)
# plot_country_indicator_top takes an indicator name and a "top" count (it has
# no country argument), so those are what we expose to interact.
ipywidgets.interact(plot_country_indicator_top,
                    indicator=indicators.tolist(),
                    top=(1, 10))
Explanation: Now let's see if we can get the top N countries for a given indicator. Note that we're doing a couple things here that we've done in the past, but in a more optimized form.
Inside our routine, we filter the dataset down to the indicator name that we're given. We then compute the maximum along all years for each country, then we sort that maximum. We then iterate over the sorted list, in opposite order, and plot those countries.
End of explanation
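# Tiny standalone demo of the argsort-and-reverse pattern used in the routine
# above: push NaNs to the bottom, sort, then walk the top of the list backwards.
demo = np.array([3.0, np.nan, 7.0, 1.0])
demo[np.isnan(demo)] = -1e90
order = np.argsort(demo)
print([int(i) for i in reversed(order[-2:])])   # indices of the two largest values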
for indicator in indicators:
if "Population" in indicator:
print(indicator)
Explanation: Let's see which indicators might be interesting -- specifically, let's take a look at all the indicators that have "Population" in their name.
End of explanation
united_states = data.filter_eq("Country Name", "United States")
us_pop_0014 = united_states.filter_eq("Indicator Name", "Population ages 0-14 (% of total)")
us_pop_1564 = united_states.filter_eq("Indicator Name", "Population ages 15-64 (% of total)")
us_pop_6500 = united_states.filter_eq("Indicator Name", "Population ages 65 and above (% of total)")
Explanation: We have lots of options. So, let's filter out a couple of them -- we'll get the three components of population. These should all add up to 100%, so we'll use them as an example of how to do stacked plots.
The three components we'll pull out are the age between 0-14, 15-64, and 65 and above. We'll do this for the United States. (But, remember, you could extend this to select the country as well!)
End of explanation
plt.plot(years, us_pop_0014["values"][0,:], label="0-14")
plt.plot(years, us_pop_1564["values"][0,:], label="15-64")
plt.plot(years, us_pop_6500["values"][0,:], label="65+")
plt.legend()
plt.xlabel("Years")
plt.ylabel("Percentage")
Explanation: We'll make a simple plot first, just to see the trends.
End of explanation
us_pop_0014["values"][0,:] + us_pop_1564["values"][0,:] + us_pop_6500["values"][0,:]
Explanation: Probably not a huge surprise, since the age range 15-64 is quite big. Let's verify that these sum up to 100%.
End of explanation
plt.stackplot(years,
us_pop_0014["values"][0,:],
us_pop_1564["values"][0,:],
us_pop_6500["values"][0,:],
labels = ["0-14", "15-64", "65+"])
plt.legend(loc="center left")
plt.xlabel("Years")
plt.ylabel("Percentage")
plt.xlim(1960, 2015)
plt.ylim(0, 100)
Explanation: Yup, looks like. Now we'll make a stackplot. This is a plot where we're showing the area included in each, and putting one on top of the other. This type of plot is useful for when you want to show the variation in composition over time of different quantities.
End of explanation
baseline = us_pop_1564["values"][0,:]
plt.subplot(2,1,1)
plt.stackplot(years,
us_pop_0014["values"][0,:],
us_pop_1564["values"][0,:],
us_pop_6500["values"][0,:],
labels = ["0-14", "15-64", "65+"])
plt.legend(loc="center left")
plt.xlabel("Years")
plt.ylabel("Percentage")
plt.xlim(1960, 2015)
plt.ylim(0, 100)
plt.subplot(2,1,2)
plt.plot(years, us_pop_0014["values"][0,:]/baseline)
plt.plot(years, us_pop_1564["values"][0,:]/baseline)
plt.plot(years, us_pop_6500["values"][0,:]/baseline)
Explanation: We can plot the stacked plot and the line plots (the latter scaled relative to the 15-64 group) one above the other to compare the two presentations more directly.
End of explanation
pop_urban = united_states.filter_eq("Indicator Name", "Population in urban agglomerations of more than 1 million")
pop_total = united_states.filter_eq("Indicator Name", "Population, total")
pop_urban = pop_urban["values"][0,:]
pop_total = pop_total["values"][0,:]
pop_non_urban = pop_total - pop_urban
plt.stackplot(years, pop_non_urban, pop_urban, labels=["Non-Urban", "Urban"])
plt.xlabel("Years")
plt.ylabel("People")
plt.legend()
Explanation: What about things that don't "stack" to a constant value, but a value varying over time? We can also use a stack plot here, which shows the trendlines, but note that it's not always going to be as clearly demonstrative of the relative percentages. While it shows composition, extracting from that the specific composition is not trivial.
End of explanation |
11,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MMTL Basics Tutorial
The purpose of this tutorial is to introduce the basic classes and flow of the MMTL package within Snorkel MeTaL (not necessarily to motivate or explain multi-task learning at large; we assume prior experience with MTL). For a broader understanding of the general Snorkel pipeline and Snorkel MeTaL library, see the Basics tutorial. In this notebook, we'll look at a simple MTL model with only two tasks, each having distinct data and only one set of labels (the ground truth or "gold" labels).
The primary purpose of the MMTL package is to enable flexible prototyping and experimentation in what we call the massive multi-task learning setting, where we have large numbers of tasks and labels of varying types, granularities, and label accuracies. A major requirement of this regime is the ability to easily add or remove new datasets, new label sets, new tasks, and new metrics. Thus, in the MMTL package, each of these concepts have been decoupled.
Environment Setup
We first need to make sure that the metal/ directory is on our Python path. If the following cell runs without an error, you're all set. If not, make sure that you've installed snorkel-metal with pip or that you've added the repo to your path if you're running from source; for example, running source add_to_path.sh from the repository root.
Step1: Create Toy Dataset
We'll now create a toy dataset to work with.
Our data points are 2D points in the square with edge length 2 centered on the origin.
Our tasks will be classifying whether these points are inside a unit circle centered on the origin, and whether they are inside a unit square centered on the origin.
Step2: Note that, as is the case throughout the Snorkel MeTaL repo, the label 0 is reserved for abstaining/no label; all actual labels have values greater than 0. This provides flexibility for supervision sources to label only portions of a dataset, for example. Thus, we'll convert our labels from being (1 = positive, 0 = negative) to (0=abstain, 1 = positive, 2 = negative).
Step3: We use our utility function split_data() to divide this synthetic data into train/valid/test splits.
Step4: And we can view the ground truth labels of our tasks visually to confirm our intuition on what the decision boundaries look like.
Step5: Define MMTL Components
Now we'll define the core components of an MMTL problem: Tasks, Models, and Payloads.
Step6: We now create the MetalModel from our list of tasks.
Step7: Payloads (Instances & Label Sets)
Now we'll define our Payloads.
A Payload is a bundle of instances (data points) and one or more corresponding label sets.
Each Payload contains data from only one split of the data (i.e., train data and test data should never touch).
Because we have two datasets with disjoint instance sets and three splits per dataset, we will make a total of six Payloads.
The instances in a Payload can consist of multiple fields of varied types (e.g., an image component and a text component for a caption generation task), and each Payload can contain multiple label sets (for example, if the same set of instances has labels for more than one task). If the instances have only one field and one label set, then you can use the helper method Payload.from_tensors(). In this case, the data you pass in (in our case, X) will be stored under the field name "data" by default and the label set will be given the name "labels" by default. See the other MMTL tutorial(s) for examples of problems where the data requires multiple fields or the instances have labels from multiple label sets.
Each Payload stores a dict that maps each label set to the task that it corresponds to.
Step8: Train Model
The MetalModel is built from a list of Task objects.
When the network is printed, it displays one input/middle/head module for each Task, (even if multiple Tasks share the same module). We can also see that each module is wrapped in a DataParallel() layer (to enable parallelization across multiple GPUs when available) and MetalModuleWrappers (which wrap arbitrary Pytorch modules to ensure that they maintain the proper input/output formats that MeTaL expects. This output is often quite long, so we generally set verbose=False when constructing the model.
In a future version update, more flexibility will be provided for specifying arbitrary DAG-like networks of modules between tasks.
Step9: To train the model, we create a MultitaskTrainer.
The default scheduling strategy in MeTaL is to pull batches from Payloads proportional to the number of batches they contain relative to the total number of batches; this is approximately equivalent to dividing all Payloads into batches at the beginning of each epoch, shuffling them, and then operating over the shuffled list sequentially.
Step10: The train_model() method requires a MetalModel to train, and payloads with data and labels to run through the model.
Note once again that the data is separate from the tasks and model; the same model could be trained using payloads belonging to a different dataset, for example.
Task-specific metrics are recorded in the form "task/payload/label_set/metric" corresponding to the task the made the predictions, the payload (data) the predictions were made on, the label set used to score the predictions, and the metric being calculated.
For model-wide metrics (such as the total loss over all tasks or the learning rate), the default task name is model, the payload name is the name of the split, and the label_set is all.
Step11: To calculate predictions or probabilities for an individual payload, we can then use the provided MetalModel methods. | Python Code:
# Confirm we can import from metal
import sys
sys.path.append('../../metal')
import metal
# Import other dependencies
import torch
import torch.nn as nn
import torch.nn.functional as F
# Set random seed for notebook
SEED = 123
%load_ext autoreload
%autoreload 2
%matplotlib inline
Explanation: MMTL Basics Tutorial
The purpose of this tutorial is to introduce the basic classes and flow of the MMTL package within Snorkel MeTaL (not necessarily to motivate or explain multi-task learning at large; we assume prior experience with MTL). For a broader understanding of the general Snorkel pipeline and Snorkel MeTaL library, see the Basics tutorial. In this notebook, we'll look at a simple MTL model with only two tasks, each having distinct data and only one set of labels (the ground truth or "gold" labels).
The primary purpose of the MMTL package is to enable flexible prototyping and experimentation in what we call the massive multi-task learning setting, where we have large numbers of tasks and labels of varying types, granularities, and label accuracies. A major requirement of this regime is the ability to easily add or remove new datasets, new label sets, new tasks, and new metrics. Thus, in the MMTL package, each of these concepts have been decoupled.
Environment Setup
We first need to make sure that the metal/ directory is on our Python path. If the following cell runs without an error, you're all set. If not, make sure that you've installed snorkel-metal with pip or that you've added the repo to your path if you're running from source; for example, running source add_to_path.sh from the repository root.
End of explanation
import torch
torch.manual_seed(SEED)
N = 500 # Data points per dataset
R = 1 # Unit distance
# Dataset 0
X0 = torch.rand(N, 2) * 2 - 1
Y0 = (X0[:,0]**2 + X0[:,1]**2 < R).long()
# Dataset 1
X1 = torch.rand(N, 2) * 2 - 1
Y1 = ((-0.5 < X1[:,0]) * (X1[:,0] < 0.5) * (-0.5 < X1[:,1]) * (X1[:,1] < 0.5)).long()
Explanation: Create Toy Dataset
We'll now create a toy dataset to work with.
Our data points are 2D points in the square with edge length 2 centered on the origin.
Our tasks will be classifying whether these points are:
Inside a unit circle centered on the origin
Inside a unit square centered on the origin
We'll visualize these decision boundaries in a few cells.
End of explanation
from metal.utils import convert_labels
Y0 = convert_labels(Y0, "onezero", "categorical")
Y1 = convert_labels(Y1, "onezero", "categorical")
Explanation: Note that, as is the case throughout the Snorkel MeTaL repo, the label 0 is reserved for abstaining/no label; all actual labels have values greater than 0. This provides flexibility for supervision sources to label only portions of a dataset, for example. Thus, we'll convert our labels from being (1 = positive, 0 = negative) to (0=abstain, 1 = positive, 2 = negative).
End of explanation
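# Quick sanity check (a small sketch) that the conversion does what the text
# says: onezero (1=positive, 0=negative) should become categorical
# (1=positive, 2=negative).
demo = torch.tensor([1, 0, 1, 0])
print(convert_labels(demo, "onezero", "categorical"))   # expect tensor([1, 2, 1, 2])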
from metal.utils import split_data
X0_splits, Y0_splits = split_data(X0, Y0, splits=[0.8, 0.1, 0.1], seed=SEED)
X1_splits, Y1_splits = split_data(X1, Y1, splits=[0.8, 0.1, 0.1], seed=SEED)
Explanation: We use our utility function split_data() to divide this synthetic data into train/valid/test splits.
End of explanation
import matplotlib.pyplot as plt
fig, axs = plt.subplots(1, 2)
axs[0].scatter(X0_splits[0][:,0], X0_splits[0][:,1], c=Y0_splits[0])
axs[0].set_aspect('equal', 'box')
axs[0].set_title('Task0', fontsize=10)
axs[1].scatter(X1_splits[0][:,0], X1_splits[0][:,1], c=Y1_splits[0])
axs[1].set_aspect('equal', 'box')
axs[1].set_title('Task1', fontsize=10)
print()
Explanation: And we can view the ground truth labels of our tasks visually to confirm our intuition on what the decision boundaries look like.
End of explanation
from metal.mmtl.task import ClassificationTask
input_module = nn.Sequential(
torch.nn.Linear(2, 8),
nn.ReLU(),
torch.nn.Linear(8, 4),
nn.ReLU()
)
# Note that both tasks are initialized with the same copy of the input_module
# This ensures that those parameters will be shared (rather than creating two separate input_module copies)
task0 = ClassificationTask(
name="CircleTask",
input_module=input_module,
head_module=torch.nn.Linear(4, 2)
)
task1 = ClassificationTask(
name="SquareTask",
input_module=input_module,
head_module=torch.nn.Linear(4, 2)
)
Explanation: Define MMTL Components
Now we'll define the core components of an MMTL problem: Tasks, Models, and Payloads.
Tasks & MetalModels
A Task is a path through a network. In MeTaL, this corresponds to a particular sequence of Pytorch modules that each instance will pass through, ending with a "task head" module that outputs a prediction for that instance on that task. Task objects are not necessarily tied to a particular set of instances (data points) or labels.
In addition to specifying a path through the network, each task specifies which loss function and metrics it supports. You can look at the documentation for the Task class to see how to use custom losses or metrics; for now, we'll use the basic built-in ClassificationTask that uses cross-entropy loss and calculates accuracy.
The MetalModel is constructed from a set of Tasks. It constructs a network by stitching together the modules provided in each Task. In a future version of MeTaL, arbitrary DAG-like graphs of modules will be supported. In the present version, each Task can specify an input module, middle module, and head module (any module that is not provided will become an IdentityModule, which simply passes the data through with no modification).
The most common structure for MTL networks is to have a common trunk (e.g., input and/or middle modules) and separate heads (i.e., head modules). We will follow that design in this tutorial, making a feedforward network with 2 shared layers and separate task heads for each task. Each module can be composed of multiple submodules, so to accomplish this design, we can either include two linear layers in our input module, or assign one to the input module and one to the middle module; we arbitrarily use the former here.
End of explanation
from metal.mmtl import MetalModel
tasks = [task0, task1]
model = MetalModel(tasks, verbose=False)
Explanation: We now create the MetalModel from our list of tasks.
End of explanation
from pprint import pprint
from metal.mmtl.payload import Payload
payloads = []
splits = ["train", "valid", "test"]
for i, (X_splits, Y_splits) in enumerate([(X0_splits, Y0_splits), (X1_splits, Y1_splits)]):
for X, Y, split in zip(X_splits, Y_splits, splits):
payload_name = f"Payload{i}_{split}"
task_name = tasks[i].name
payload = Payload.from_tensors(payload_name, X, Y, task_name, split, batch_size=32)
payloads.append(payload)
pprint(payloads)
Explanation: Payloads (Instances & Label Sets)
Now we'll define our Payloads.
A Payload is a bundle of instances (data points) and one or more corresponding label sets.
Each Payload contains data from only one split of the data (i.e., train data and test data should never touch).
Because we have two datasets with disjoint instance sets and three splits per dataset, we will make a total of six Payloads.
The instances in a Payload can consist of multiple fields of varied types (e.g., an image component and a text component for a caption generation task), and each Payload can contain multiple label sets (for example, if the same set of instances has labels for more than one task). If the instances have only one field and one label set, then you can use the helper method Payload.from_tensors(). In this case, the data you pass in (in our case, X) will be stored under the field name "data" by default and the label set will be given the name "labels" by default. See the other MMTL tutorial(s) for examples of problems where the data requires multiple fields or the instances have labels from multiple label sets.
Each Payload stores a dict that maps each label set to the task that it corresponds to.
End of explanation
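# A quick way to confirm the default names that Payload.from_tensors() assigns.
# The Y_dict access mirrors what we do later in this notebook; X_dict is an
# assumption about the name of the dataset's field container.
first = payloads[0]
print(first)
print(first.data_loader.dataset.Y_dict.keys())   # expect dict_keys(['labels'])
print(first.data_loader.dataset.X_dict.keys())   # expect dict_keys(['data'])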
model = MetalModel(tasks, verbose=False)
Explanation: Train Model
The MetalModel is built from a list of Task objects.
When the network is printed, it displays one input/middle/head module for each Task, (even if multiple Tasks share the same module). We can also see that each module is wrapped in a DataParallel() layer (to enable parallelization across multiple GPUs when available) and MetalModuleWrappers (which wrap arbitrary Pytorch modules to ensure that they maintain the proper input/output formats that MeTaL expects. This output is often quite long, so we generally set verbose=False when constructing the model.
In a future version update, more flexibility will be provided for specifying arbitrary DAG-like networks of modules between tasks.
End of explanation
from metal.mmtl.trainer import MultitaskTrainer
trainer = MultitaskTrainer()
Explanation: To train the model, we create a MultitaskTrainer.
The default scheduling strategy in MeTaL is to pull batches from Payloads proportional to the number of batches they contain relative to the total number of batches; this is approximately equivalent to dividing all Payloads into batches at the beginning of each epoch, shuffling them, and then operating over the shuffled list sequentially.
End of explanation
scores = trainer.train_model(
model,
payloads,
n_epochs=20,
log_every=2,
lr=0.02,
progress_bar=False,
)
Explanation: The train_model() method requires a MetalModel to train, and payloads with data and labels to run through the model.
Note once again that the data is separate from the tasks and model; the same model could be trained using payloads belonging to a different dataset, for example.
Task-specific metrics are recorded in the form "task/payload/label_set/metric" corresponding to the task the made the predictions, the payload (data) the predictions were made on, the label set used to score the predictions, and the metric being calculated.
For model-wide metrics (such as the total loss over all tasks or the learning rate), the default task name is model, the payload name is the name of the split, and the label_set is all.
End of explanation
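# The "task/payload/label_set/metric" naming convention described above is
# easiest to see by listing the keys of the returned dict (assuming, as the
# assignment above suggests, that train_model() returns the final metrics).
for key in sorted(scores):
    print(key, '->', scores[key])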
import numpy as np
from metal.contrib.visualization.analysis import *
Y_probs = np.array(model.predict_probs(payloads[0]))
Y_preds = np.array(model.predict(payloads[0]))
Y_gold = payloads[0].data_loader.dataset.Y_dict["labels"].numpy()
plot_predictions_histogram(Y_preds, Y_gold)
plot_probabilities_histogram(Y_probs)
Explanation: To calculate predictions or probabilities for an individual payload, we can then use the provided MetalModel methods.
End of explanation |
11,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Errors and Exceptions
While executing a Python program we may encounter errors. There are two types of errors: syntax errors and exceptions.
Step1: Exceptions
Step2: Built-in Exceptions
Python creates an Exception object whenever a runtime error occurs. There are a number of built-in exceptions.
Step3: Following are some of the built-in exceptions.
ZeroDivisionError - Raised when you try to divide a number by zero
FileNotFoundError - Raised when a file required does not exist
SyntaxError - Raised when proper syntax is not applied
NameError - Raised when a variable is not found in local or global scope
KeyError - Raised when a key is not found in a dictionary
Handling Exceptions
Python provides 'try/except' statements to handle exceptions. The operation that can raise an exception is placed inside the 'try' statement, and the code that handles the exception is written in the 'except' clause.
Step4: Catching Specific Exceptions
A try clause can have any number of except clauses to capture specific exceptions, and only one will be executed in case an exception occurs. We can use a tuple of values to specify multiple exceptions in a single except clause.
Step5: The last except clause may omit the exception name(s), to serve as a wildcard. Use this with extreme caution, since it is easy to mask a real programming error in this way! It can also be used to print an error message and then re-raise the exception (allowing a caller to handle the exception as well)
Step6: The try … except statement has an optional else clause, which, when present, must follow all except clauses. It is useful for code that must be executed if the try clause does not raise an exception. For example
Step7: The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try … except statement.
Exception Instances
The except clause may specify a variable after the exception name. The variable is bound to an exception instance with the arguments stored in instance.args. For convenience, the exception instance defines str() so the arguments can be printed directly without having to reference .args.
Step8: Exception handlers don’t just handle exceptions if they occur immediately in the try clause, but also if they occur inside functions that are called (even indirectly) in the try clause. For example
Step9: Raise Exceptions
The raise statement allows the programmer to force a specified exception to occur. For example
Step10: If you need to determine whether an exception was raised but don’t intend to handle it, a simpler form of the raise statement allows you to re-raise the exception
Step11: User Exceptions
Python has many built-in exceptions which force your program to output an error when something in it goes wrong.
However, sometimes you may need to create custom exceptions that serve your purpose.
In Python, users can define such exceptions by creating a new class. This exception class has to be derived, either directly or indirectly, from the Exception class. Most of the built-in exceptions are also derived from this class.
Step15: Here, we have created a user-defined exception called CustomError which is derived from the Exception class. This new exception can be raised, like other exceptions, using the raise statement with an optional error message.
Point to Note
When we are developing a large Python program, it is a good practice to place all the user-defined exceptions that our program raises in a separate file. Many standard modules do this. They define their exceptions separately as exceptions.py or errors.py (generally but not always).
Most exceptions are defined with names that end in “Error,” similar to the naming of the standard exceptions.
Step19: Here, we have defined a base class called Error.
The other two exceptions (ValueTooSmallError and ValueTooLargeError) that are actually raised by our program are derived from this class. This is the standard way to define user-defined exceptions in Python programming.
Many standard modules define their own exceptions to report errors that may occur in functions they define. A detailed example is given below
Step20: Clean up Actions
The try statement in Python can have an optional finally clause. This clause is executed no matter what, and is generally used to release external resources.
A finally clause is always executed before leaving the try statement, whether an exception has occurred or not. When an exception has occurred in the try clause and has not been handled by an except clause (or it has occurred in an except or else clause), it is re-raised after the finally clause has been executed.
The finally clause is also executed “on the way out” when any other clause of the try statement is left via a break, continue or return statement.
Step21: Please note that the TypeError raised by dividing two strings is not handled by the except clause and therefore re-raised after the finally clause has been executed.
Pre Clean up Actions
Some objects define standard clean-up actions to be undertaken when the object is no longer needed.
Look at the following example, which tries to open a file and print its contents to the screen.
Step22: The problem with this code is that it leaves the file open for an indeterminate amount of time after this part of the code has finished executing. This is not a best practice. | Python Code:
print('Hello)
Explanation: Errors and Exceptions
While executing a Python program we may encounter errors. There are two types of errors:
Syntax Errors - When you don't follow the proper structure of the Python program (like missing a quote when initialising a string).
Exceptions - Sometimes even when the syntax is correct, errors may occur when the program is run or executed. These runtime errors are called exceptions (like trying to divide by zero or opening a file that does not exist).
If Exceptions are not handled properly, the program will crash and come to a sudden & unexpected halt.
Syntax Errors
End of explanation
1 / 0
open('doesnotexistfile.txt')
Explanation: Exceptions
End of explanation
print(locals()['__builtins__'])
Explanation: Built-in Exceptions
Python creates an Exception object whenever a runtime error occurs. There are a number of built-in exceptions.
End of explanation
import sys
def divide(a,b):
try:
return a / b
except:
print(sys.exc_info()[0])
divide (1,2)
divide (2,0) # This will be captured by the 'except' clause
# print custom error message
def divide(a,b):
try:
return a / b
except:
print('Error occured',sys.exc_info()[0])
divide (1,2)
divide (2,0) # This will be captured by the 'except' clause
Explanation: Following are some of the built-in exceptions.
ZeroDivisionError - Raised when you try to divide a number by zero
FileNotFoundError - Raised when a file required does not exist
SyntaxError - Raised when proper syntax is not applied
NameError - Raised when a variable is not found in local or global scope
KeyError - Raised when a key is not found in a dictionary
Handling Exceptions
Python provides 'try/except' statements to handle exceptions. The operation that can raise an exception is placed inside the 'try' statement, and the code that handles the exception is written in the 'except' clause.
End of explanation
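# A couple of quick illustrations of the built-in exceptions listed above.
try:
    {}["missing"]                 # KeyError: the key is not in the dictionary
except KeyError as err:
    print('KeyError caught:', err)
try:
    undefined_variable            # NameError: the name is not defined
except NameError as err:
    print('NameError caught:', err)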
def divide(a,b):
try:
return a / b
except (ZeroDivisionError):
print('Number cannot be divided by zero or non-integer')
except:
print('Error Occured',sys.exc_info()[0])
divide (1,2)
divide (2,0) # This will be captured by the 'except - zero division error' clause
divide (2,'a') # This will be captured by the generic 'except' clause
def divide(a,b):
try:
return a / b
except (ZeroDivisionError, TypeError): # use a tuple to capture multiple errors
print('Number cannot be divided by zero or non-integer')
except:
print('Error Occured',sys.exc_info()[0])
divide (1,2)
divide (2,0) # This will be captured by the 'except - zero division error' clause
divide (2,'a') # This will be captured by the generic 'except' clause
Explanation: Catching Specific Exceptions
A try clause can have any number of except clauses to capture specific exceptions, and only one will be executed in case an exception occurs. We can use a tuple of values to specify multiple exceptions in a single except clause.
End of explanation
import sys
try:
f = open('myfile.txt')
s = f.readline()
i = int(s.strip())
except OSError as err:
print("OS error: {0}".format(err))
except ValueError:
print("Could not convert data to an integer.")
except:
print("Unexpected error:", sys.exc_info()[0])
raise
Explanation: The last except clause may omit the exception name(s), to serve as a wildcard. Use this with extreme caution, since it is easy to mask a real programming error in this way! It can also be used to print an error message and then re-raise the exception (allowing a caller to handle the exception as well):
End of explanation
for arg in sys.argv[1:]:
try:
f = open(arg, 'r')
except OSError:
print('cannot open', arg)
else:
print(arg, 'has', len(f.readlines()), 'lines')
f.close()
Explanation: The try … except statement has an optional else clause, which, when present, must follow all except clauses. It is useful for code that must be executed if the try clause does not raise an exception. For example:
End of explanation
try:
raise Exception('1002','Custom Exception Occured')
except Exception as inst:
print(type(inst))
print(inst)
print(inst.args)
errno, errdesc = inst.args
print('Error Number:',errno)
print('Error Description:',errdesc)
Explanation: The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try … except statement.
Exception Instances
The except clause may specify a variable after the exception name. The variable is bound to an exception instance with the arguments stored in instance.args. For convenience, the exception instance defines str() so the arguments can be printed directly without having to reference .args.
End of explanation
def func_will_fail():
return 1 / 0
try:
func_will_fail()
except ZeroDivisionError as err:
print('Handling Error - ',err)
Explanation: Exception handlers don’t just handle exceptions if they occur immediately in the try clause, but also if they occur inside functions that are called (even indirectly) in the try clause. For example:
End of explanation
raise NameError('Error Occured')
Explanation: Raise Exceptions
The raise statement allows the programmer to force a specified exception to occur. For example:
End of explanation
try:
raise NameError('Error Captured')
except NameError:
print('Captured Exception')
raise
Explanation: If you need to determine whether an exception was raised but don’t intend to handle it, a simpler form of the raise statement allows you to re-raise the exception:
End of explanation
class CustomError(Exception):
pass
raise CustomError()
raise CustomError('Unexpected Error Occured')
Explanation: User Exceptions
Python has many built-in exceptions which force your program to output an error when something in it goes wrong.
However, sometimes you may need to create custom exceptions that serve your purpose.
In Python, users can define such exceptions by creating a new class. This exception class has to be derived, either directly or indirectly, from the Exception class. Most of the built-in exceptions are also derived from this class.
End of explanation
# define Python user-defined exceptions
class Error(Exception):
    """Base class for other exceptions"""
    pass

class ValueTooSmallError(Error):
    """Raised when the input value is too small"""
    pass

class ValueTooLargeError(Error):
    """Raised when the input value is too large"""
    pass
# our main program
# user guesses a number until he/she gets it right
# you need to guess this number
number = 10
while True:
try:
i_num = int(input("Enter a number: "))
if i_num < number:
raise ValueTooSmallError
elif i_num > number:
raise ValueTooLargeError
break
except ValueTooSmallError:
print("This value is too small, try again!")
print()
except ValueTooLargeError:
print("This value is too large, try again!")
print()
print("Congratulations! You guessed it correctly.")
Explanation: Here, we have created a user-defined exception called CustomError which is derived from the Exception class. This new exception can be raised, like other exceptions, using the raise statement with an optional error message.
Point to Note
When we are developing a large Python program, it is a good practice to place all the user-defined exceptions that our program raises in a separate file. Many standard modules do this. They define their exceptions separately as exceptions.py or errors.py (generally but not always).
Most exceptions are defined with names that end in “Error,” similar to the naming of the standard exceptions.
End of explanation
class Error(Exception):
    """Base class for exceptions in this module."""
    pass

class InputError(Error):
    """Exception raised for errors in the input.

    Attributes:
        expression -- input expression in which the error occurred
        message -- explanation of the error
    """

    def __init__(self, expression, message):
        self.expression = expression
        self.message = message

class TransitionError(Error):
    """Raised when an operation attempts a state transition that's not
    allowed.

    Attributes:
        previous -- state at beginning of transition
        next -- attempted new state
        message -- explanation of why the specific transition is not allowed
    """

    def __init__(self, previous, next, message):
        self.previous = previous
        self.next = next
        self.message = message
Explanation: Here, we have defined a base class called Error.
The other two exceptions (ValueTooSmallError and ValueTooLargeError) that are actually raised by our program are derived from this class. This is the standard way to define user-defined exceptions in Python programming.
Many standard modules define their own exceptions to report errors that may occur in functions they define. A detailed example is given below:
End of explanation
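# A small usage sketch (with made-up arguments) for the InputError class
# defined above: raise it with the offending expression and a message, then
# read those attributes back in the handler.
try:
    raise InputError('2 +* 2', 'invalid operator sequence')
except InputError as err:
    print(err.expression, '->', err.message)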
try:
    raise KeyboardInterrupt
finally:
print('Bye')
def divide(a,b):
try:
result = a / b
except ZeroDivisionError:
print('Number cannot be divided by zero')
else:
print('Result',result)
finally:
print('Executed Finally Clause')
divide(2,1)
divide(2,0)
divide('1','2')
Explanation: Clean up Actions
The try statement in Python can have an optional finally clause. This clause is executed no matter what, and is generally used to release external resources.
A finally clause is always executed before leaving the try statement, whether an exception has occurred or not. When an exception has occurred in the try clause and has not been handled by an except clause (or it has occurred in an except or else clause), it is re-raised after the finally clause has been executed.
The finally clause is also executed “on the way out” when any other clause of the try statement is left via a break, continue or return statement.
End of explanation
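# A minimal sketch of the "on the way out" behaviour described above: the
# finally clause runs even though the try block returns early.
def lookup(d, key):
    try:
        return d[key]
    finally:
        print('finally ran for', key)
print(lookup({'a': 1}, 'a'))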
for line in open("myfile.txt"):
print(line, end="")
Explanation: Please note that the TypeError raised by dividing two strings is not handled by the except clause and therefore re-raised after the finally clause has been executed.
Pre Clean up Actions
Some objects define standard clean-up actions to be undertaken when the object is no longer needed.
Look at the following example, which tries to open a file and print its contents to the screen.
End of explanation
with open("test.txt") as f:
for line in f:
print(line, end="")
Explanation: The problem with this code is that it leaves the file open for an indeterminate amount of time after this part of the code has finished executing. This is not a best practice.
End of explanation |
11,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Log-Normal or Over-Dispersed Poisson?
We replicate the empirical applications in Harnau (2018a) in Section 2 and Section 6.
The work on this vignette was supported by the European Research Council, grant AdG 694262.
First, we import the package
Step1: 2. Empirical illustration of the problem
This section motivates the problem. Based on the data from Verrall et al. (2010), it applies the misspecification tests from Harnau (2018b). We split the data into two sub-samples after the fifth accident year. Then we test for breaks in dispersion parameters with a Bartlett test and linear predictors with an F-test.
Remark: we replicated the empirical applications in Harnau (2018b) here.
Step2: Neither in a log-normal nor in an over-dispersed Poisson model can we convincingly reject the model specification based on these tests. This illustrates a situation in which it is not clear what model to use so that the new R-test can prove its usefulness.
6.1 Empirical illustration revisited
In this section, we return to the empirical illustration from above and test whether an R-test can help us to decide between (generalized) log-normal and over-dispersed Poisson model.
The package comes with built-in functionality for the R-test. Say we want to test
$$ H_0: \text{generalized log-normal} \quad \text{vs} \quad \text{over-dispersed Poisson} $$
based on the statistic $R^*_{ls}$ and compare it to $\widehat{\mathrm{R}}^*_{ls}$.
Step3: This matches the value for the statistic in the paper and the p-value in Table 3 (which is given in %).
Remark: besides the value of the test statistic and the p-value under the null, apc.r_test also returns the power at the value of the R-statistic (power_at_R). The power corresponds to one minus the p-value under the alternative.
Step4: The R-statistics are as follows.
Step5: And Table 3 is given by this
Step6: In the paper we now move on to find the 5% critical value under the over-dispersed Poisson model as well as the power at that value. This functionality is not directly implemented in the package; however we can easily replicate it with the package quad_form_ratio.
Step7: 6.2 Sensitivity to invalid model reductions
In this section, we use the data from Barnett and Zehnwirth (2000, Table 3.5). These data are known to require a calendar effect for modeling. We show that the test results may be misleading when the baseline model is already misspecified.
$H_0:$ Generalized log-normal
Step8: Thus, we reject the extended generalized log-normal model.
Despite the rejection of the model, we move on to test whether we can drop the calendar effect
Step9: As expected for the data at hand, we reject this reduction; the calendar effect is needed.
For illustrative purposes, we nonetheless move on to test
$$ H_0: \text{generalized log-normal} \quad \text{vs} \quad H_A: \text{over-dispersed Poisson}, $$
thus a scenario in which neither model has a calendar effect.
Step10: Perhaps surprisingly, the generalized log-normal model looks better now - we cannot convincingly reject it against the over-dispersed Poisson model.
$H_0:$ Over-dispersed Poisson
Step11: In this case, we cannot reject the over-dispersed Poisson model. The power at the value of $R^*_{ls}$ is $0.98$; this corresponds to one minus the p-value under the extended generalized log-normal null hypothesis.
Just as before, we can test whether we can reasonably drop the calendar effect, just now from the extended over-dispersed Poisson model
Step12: Once again, this reduction is rejected.
Neglecting this result, we investigate what happens if we test the models without calendar effect in
$$ H_0: \text{over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{generalized log-normal}. $$
Step13: This time, we reject the over-dispersed Poisson model. Thus, the results completely flipped by dropping the calendar effect. With calendar effect, we cannot reject the over-dispersed Poisson model but can reject the generalized log-normal model. By dropping the much needed calendar effect, we turn this on its head and reject the over-dispersed Poisson but not the generalized log-normal model. Thus, we should be careful what we use as a baseline model before testing.
6.3 A general to specific testing procedure
Taking into account the insights from above, we now consider a general to specific testing procedure. That is, we start with "the most general" model and test for possible reductions, stopping once we run into a rejection. For this application, we consider the data by Taylor and Ashe (1983) that has become a kind of benchmark data set.
First, we consider an extended generalized log-normal model and test it against its over-dispersed Poisson counterpart
Step14: The $R$-test rejects the extended generalized log-normal model. Thus, we do not proceed with this model.
Instead, we now consider the reverse test
Step15: We cannot reject the hypothesis. Thus, we have to hunt for further evidence against the extended over-dispersed Poisson model.
We consider the misspecification tests from Harnau (2018b). In the extended over-dispersed Poisson model with calendar effect, we split the data into four sub-samples and then test
$$ H_{\sigma^2}: \sigma^2_1 = \dots = \sigma^2_4. $$
Step16: This is a somewhat close call. In light of the fact that simpler models tend to perform better in forecasting, we interpret the test result as the absence of strong evidence against the hypothesis.
Then, taking $H_{\sigma^2}$ as given, we can move on to test for breaks in linear predictors across sub-samples
$$ H_{\mu, \sigma^2}: \text{common linear predictors across the sub-samples, given } H_{\sigma^2}. $$
Step17: Similar to before, we take the result of the $F$-test as a lack of convincing evidence against the model.
Thus, we now consider whether we can reduce the model by dropping the calendar effect by means of an $F$-test for
$$ H_0
Step18: With a p-value of $0.30$, we cannot reject this reduction. In the model without calendar effect, point forecasts match the chain-ladder technique forecasts.
We can now consider whether the model without calendar effect still survives the same tests it did before. First, we test it against a generalized log-normal model
Step19: The model passes this test easily.
Now we can repeat the misspecification tests, testing
$$ H_{\sigma^2} | Python Code:
import apc
# Turn off FutureWarnings
import warnings
warnings.simplefilter('ignore', FutureWarning)
Explanation: Log-Normal or Over-Dispersed Poisson?
We replicate the empirical applications in Harnau (2018a) in Section 2 and Section 6.
The work on this vignette was supported by the European Research Council, grant AdG 694262.
First, we import the package
End of explanation
for family in ('log_normal_response', 'od_poisson_response'):
model_VNJ = apc.Model()
model_VNJ.data_from_df(apc.loss_VNJ(), data_format='CL')
model_VNJ.fit(family, 'AC')
sub_models_VNJ = [model_VNJ.sub_model(coh_from_to=(1,5)),
model_VNJ.sub_model(coh_from_to=(6,10))]
bartlett_VNJ = apc.bartlett_test(sub_models_VNJ)
f_VNJ = apc.f_test(model_VNJ, sub_models_VNJ)
print(family)
print('='*len(family))
print('Bartlett test p-value: {:.2f}'.format(
bartlett_VNJ['p_value']))
print('F-test p-value: {:.2f} \n'.format(
f_VNJ['p_value']))
Explanation: 2. Empirical illustration of the problem
This section motivates the problem. Based on the data from Verrall et al. (2010), it applies the misspecification tests from Harnau (2018b). We split the data into two sub-samples after the fifth accident year. Then we test for breaks in dispersion parameters with a Bartlett test and linear predictors with an F-test.
Remark: we replicated the empirical applications in Harnau (2018b) here.
End of explanation
r_VNJ = apc.r_test(apc.loss_VNJ(), # specify the data set
family_null='gen_log_normal_response', # declare null model
predictor='AC', # AC = age-cohort matching the chain-ladder
R_stat='wls_ls', # R-stat: wls_ls -> R^{star}_{ls}
R_dist='wls_ls') # Pi est in limiting dist: wls_ls -> Pi^{star}_{ls}
print('R-statistic: {:.2f}'.format(r_VNJ['R_stat']))
print('p_value: {:.4f}'.format(r_VNJ['p_value']))
Explanation: Neither in a log-normal nor in an over-dispersed Poisson model can we convincingly reject the model specification based on these tests. This illustrates a situation in which it is not clear what model to use so that the new R-test can prove its usefulness.
6.1 Empirical illustration revisited
In this section, we return to the empirical illustration from above and test whether an R-test can help us to decide between (generalized) log-normal and over-dispersed Poisson model.
The package comes with built-in functionality for the R-test. Say we want to test
$$ H_0: \text{generalized log-normal} \quad \text{vs} \quad \text{over-dispersed Poisson} $$
based on the statistic $R^*_{ls}$ and compare it to $\widehat{\mathrm{R}}^*_{ls}$.
End of explanation
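# Side note (a quick sketch): the same test can also be run with the roles
# reversed, taking the over-dispersed Poisson model as the null instead.
r_VNJ_odp = apc.r_test(apc.loss_VNJ(), family_null='od_poisson_response',
                       predictor='AC', R_stat='wls_ls', R_dist='wls_ls')
print('p_value under the ODP null: {:.4f}'.format(r_VNJ_odp['p_value']))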
import pandas as pd
def r_test_all_combs(model):
# create an empty series to be filled with R-statistics
R_stats = pd.Series(None, index=('$R_{ls}$', '$R_{ql}$',
'$R^*_{ls}$', '$R^*_{ql}$'))
# create an empty df to be filled with p-values
base_df = pd.DataFrame(
None,
index = ('$\widehat{\mathrm{R}}_{ls}$',
'$\widehat{\mathrm{R}}_{ql}$',
'$\widehat{\mathrm{R}}^*_{ls}$',
'$\widehat{\mathrm{R}}^*_{ql}$'),
columns = pd.MultiIndex.from_product(
[
('$H_0: $ generalized log-normal',
'$H_0: $ over-dispersed Poisson'),
('$R_{ls}$', '$R_{ql}$', '$R^*_{ls}$', '$R^*_{ql}$')
])
)
# iterate over ways to compute the R-statistic
for i, R_stat in enumerate(['ls', 'ql', 'wls_ls', 'wls_ql']):
# iterate over ways to estimate Pi in the limiting dist
for j, R_dist in enumerate(['ls', 'ql', 'wls_ls', 'wls_ql']):
# compute R-test
r_test = apc.r_test(apc.loss_VNJ(),
family_null='gen_log_normal_response',
predictor='AC',
R_stat=R_stat, R_dist=R_dist,
data_format='CL')
base_df.iloc[j, i] = r_test['p_value']
base_df.iloc[j, i+4] = 1 - r_test['power_at_R']
R_stats.iloc[i] = r_test['R_stat']
return base_df, R_stats
table3, R_stats = r_test_all_combs(model_VNJ)
Explanation: This matches the value for the statistic in the paper and the p-value in Table 3 (which is given in %).
Remark: besides the value of the test statistic and the p-value under the null, apc.r_test also returns the power at the value of the R-statistic (power_at_R). The power corresponds to one minus the p-value under the alternative.
To replicate the remaining test statistics and the entire Table 3 we employ a small function that iterates over all possible combinations.
End of explanation
pd.DataFrame(R_stats.rename('R-Statistic')).T
Explanation: The R-statistics are as follows.
End of explanation
table3*100
Explanation: And Table 3 is given by this:
End of explanation
from quad_form_ratio import saddlepoint_cdf_R, saddlepoint_inv_cdf_R
import numpy as np
model_VNJ.fit('log_normal_response', 'AC')
X, Z = model_VNJ.design, np.log(model_VNJ.data_vector['response'])
tau_ls = model_VNJ.fitted_values.sum()
sqrt_Pi_ls = np.diag(np.sqrt(model_VNJ.fitted_values/tau_ls))
rss = model_VNJ.rss
X_star_ls, Z_star_ls = sqrt_Pi_ls.dot(X), sqrt_Pi_ls.dot(Z)
# fit the weighted least squares model, we set rcond=0. since
# we know that X_star has full column rank.
wls_ls_fit = np.linalg.lstsq(X_star_ls, Z_star_ls, rcond=0.)
xi_star_ls, RSS_star_ls = wls_ls_fit[0], wls_ls_fit[1][0]
fitted_wls_ls = np.exp(X.dot(xi_star_ls))
sqrt_Pi_star_ls = np.diag(np.sqrt(fitted_wls_ls/fitted_wls_ls.sum()))
# Use the QR-decomposition to compute the orthogonal projection M
Q, _ = np.linalg.qr(X)
M = np.identity(model_VNJ.n) - Q.dot(Q.T)
# do the same for the weighted least squares orthogonal projection
X_star_ls = sqrt_Pi_star_ls.dot(X)
Q_star_ls, _ = np.linalg.qr(X_star_ls)
M_star_ls = np.identity(model_VNJ.n) - Q_star_ls.dot(Q_star_ls.T)
# A refers to the sandwiched matrix in the numerator
# B refers to the sandwiched matrix in the denominator
# _gln and _odp refer to the sandwiches under the respective nulls
A_gln = M
B_gln = sqrt_Pi_star_ls.dot(M_star_ls).dot(sqrt_Pi_star_ls)
A_odp = np.linalg.inv(sqrt_Pi_star_ls).dot(M).dot(np.linalg.inv(sqrt_Pi_star_ls))
B_odp = M_star_ls
# We compute the 5% critical value under ODP (lower quantile)
# The function iterates to find the critical value up to a precision of 0.0001
cv = saddlepoint_inv_cdf_R(A_odp, B_odp, probabilities=[0.05])
print('5% critical value for over-dispersed Poisson: {:.1f}'.format(cv[0.05]))
# Given the critical value, we compute the power
pwr_at_cv5 = saddlepoint_cdf_R(A_gln, B_gln, cv)
print('Power at 5% critical value: {:.2f}'.format(pwr_at_cv5.iloc[0]))
Explanation: In the paper we now move on to find the 5% critical value under the over-dispersed Poisson model as well as the power at that value. This functionality is not directly implemented in the package; however we can easily replicate it with the package quad_form_ratio.
End of explanation
r_BZ_GLNe = apc.r_test(
apc.loss_BZ(), family_null='gen_log_normal_response',
predictor='APC', # APC = age-period-cohort, incl. calendar effect
data_format='CL' # optional, the package can infer the data_format
) # the defaults for R_stat and R_dist are our preferred 'wls_ls'
print('R-statistic: {:.2f}'.format(r_BZ_GLNe['R_stat']))
print('p_value: {:.2f}'.format(r_BZ_GLNe['p_value']))
Explanation: 6.2 Sensitivity to invalid model reductions
In this section, we use the data from Barnett and Zehnwirth (2000, Table 3.5). These data are known to require a calendar effect for modeling. We show that the test results may be misleading when the baseline model is already misspecified.
$H_0:$ Generalized log-normal
First, we test in a model with calendar effect so the linear predictor is $\mu_{ij} = \alpha_i + \beta_j + \gamma_k + \delta$. The first hypothesis we consider is
$$ H_0: \text{extended generalized log-normal} \quad \text{vs} \quad H_A: \text{extended over-dispersed Poisson}.$$
This is easily tested with an $R$-test.
End of explanation
model_BZ = apc.Model()
model_BZ.data_from_df(apc.loss_BZ(), data_format='CL')
model_BZ.fit_table('log_normal_response', attach_to_self=False).loc[
['AC'],:
]
Explanation: Thus, we reject the extended generalized log-normal model.
Despite the rejection of the model, we move on to test whether we can drop the calendar effect:
$$ H_0: \text{generalized log-normal} \quad \text{vs} \quad H_A: \text{extended generalized log-normal}.$$
We can do this with a simple $F$-test, both in a log-normal and a generalized log-normal model (Kuang and Nielsen, 2018).
End of explanation
r_BZ_GLN = apc.r_test(
apc.loss_BZ(), family_null='gen_log_normal_response',
predictor='AC', data_format='CL'
)
print('R-statistic: {:.2f}'.format(r_BZ_GLN['R_stat']))
print('p_value: {:.2f}'.format(r_BZ_GLN['p_value']))
Explanation: As expected for the data at hand, we reject this reduction; the calendar effect is needed.
For illustrative purposes, we nonetheless move on to test
$$ H_0: \text{generalized log-normal} \quad \text{vs} \quad H_A: \text{over-dispersed Poisson}, $$
thus a scenario in which neither model has a calendar effect.
End of explanation
r_BZ_ODPe = apc.r_test(
apc.loss_BZ(), family_null='od_poisson_response', data_format='CL'
) # the default for predictor is APC, thus includes a calendar effect
print('R-statistic: {:.2f}'.format(r_BZ_ODPe['R_stat']))
print('p_value: {:.2f}'.format(r_BZ_ODPe['p_value']))
print('Power at R: {:.2f}'.format(r_BZ_ODPe['power_at_R']))
Explanation: Perhaps surprisingly, the generalized log-normal model looks better now - we cannot convincingly reject it against the over-dispersed Poisson model.
$H_0:$ Over-dispersed Poisson
Now we start the other way around and take the over-dispersed Poisson model as a baseline. First, we again include a calendar effect and test
$$ H_0: \text{extended over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{extended generalized log-normal}.$$
End of explanation
model_BZ = apc.Model()
model_BZ.data_from_df(apc.loss_BZ(), data_format='CL')
model_BZ.fit_table('od_poisson_response', attach_to_self=False).loc[
['AC'],:
]
Explanation: In this case, we cannot reject the over-dispersed Poisson model. The power at the value of $R^*_{ls}$ is $0.98$; this corresponds to one minus the p-value under the extended generalized log-normal null hypothesis.
Just as before, we can test whether we can reasonably drop the calendar effect, just now from the extended over-dispersed Poisson model:
$$ H_0: \text{over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{extended over-dispersed Poisson}.$$
This is easily done with an $F$-test (Harnau and Nielsen 2017).
End of explanation
r_BZ_ODP = apc.r_test(
apc.loss_BZ(), family_null='od_poisson_response',
predictor='AC', data_format='CL'
)
print('R-statistic: {:.2f}'.format(r_BZ_ODP['R_stat']))
print('p_value: {:.2f}'.format(r_BZ_ODP['p_value']))
Explanation: Once again, this reduction is rejected.
Neglecting this result, we investigate what happens if we test the models without calendar effect in
$$ H_0: \text{over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{generalized log-normal}.$$
End of explanation
r_TA_GLNe = apc.r_test(
apc.loss_TA(), family_null='gen_log_normal_response',
predictor='APC', data_format='CL'
)
print('R-statistic: {:.2f}'.format(r_TA_GLNe['R_stat']))
print('p_value: {:.4f}'.format(r_TA_GLNe['p_value']))
Explanation: This time, we reject the over-dispersed Poisson model. Thus, the results completely flipped by dropping the calendar effect. With calendar effect, we cannot reject the over-dispersed Poisson model but can reject the generalized log-normal model. By dropping the much needed calendar effect, we turn this on its head and reject the over-dispersed Poisson but not the generalized log-normal model. Thus, we should be careful what we use as a baseline model before testing.
6.3 A general to specific testing procedure
Taking into account the insights from above, we now consider a general to specific testing procedure. That is, we start with "the most general" model and test for possible reductions, stopping once we run into a rejection. For this application, we consider the data by Taylor and Ashe (1983), which has become something of a benchmark data set.
First, we consider an extended generalized log-normal model and test it against its over-dispersed Poisson counterpart:
$$ H_0: \text{extended generalized log-normal} \quad \text{vs} \quad H_A: \text{extended over-dispersed Poisson}.$$
End of explanation
r_TA_ODPe = apc.r_test(
apc.loss_TA(), family_null='od_poisson_response',
predictor='APC', data_format='CL'
)
print('R-statistic: {:.2f}'.format(r_TA_ODPe['R_stat']))
print('p_value: {:.4f}'.format(r_TA_ODPe['p_value']))
print('Power at R: {:.2f}'.format(r_TA_ODPe['power_at_R']))
Explanation: The $R$-test rejects the extended generalized log-normal model. Thus, we do not proceed with this model.
Instead, we now consider the reverse test:
$$ H_0: \text{extended over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{extended generalized log-normal}. $$
End of explanation
model_TAe = apc.Model()
model_TAe.data_from_df(apc.loss_TA(), data_format='CL')
model_TAe.fit('od_poisson_response', 'APC')
sub_models_TAe = [model_TAe.sub_model(per_from_to=(1,5)),
model_TAe.sub_model(
coh_from_to=(1,5), age_from_to=(1,5), per_from_to=(6,10)
),
model_TAe.sub_model(age_from_to=(6,10)),
model_TAe.sub_model(coh_from_to=(6,10))]
bartlett_TA_ODPe = apc.bartlett_test(sub_models_TAe)
print('Bartlett test p-value: {:.2f}'.format(bartlett_TA_ODPe['p_value']))
Explanation: We cannot reject the hypothesis. Thus, we have to hunt for further evidence against the extended over-dispersed Poisson model.
We consider the misspecification tests from Harnau (2018b). In the extended over-dispersed Poisson model with calendar effect, we split the data into four sub-samples and then test
$$ H_{\sigma^2}: \sigma^2_\ell = \sigma^2 $$
with a Bartlett test.
End of explanation
f_TA_ODPe = apc.f_test(model_TAe, sub_models_TAe)
print('F-test p-value: {:.2f} \n'.format(f_TA_ODPe['p_value']))
Explanation: This is a somewhat close call. In light of the fact that simpler models tend to perform better in forecasting, we interpret the test result as the absence of strong evidence against the hypothesis.
Then, taking $H_{\sigma^2}$ as given, we can move on to test for breaks in linear predictors across sub-samples
$$ H_{\mu, \sigma^2}: \alpha_{i, \ell} + \beta_{j, \ell} + \gamma_{k, \ell} + \delta_\ell = \alpha_i + \beta_j + \gamma_k + \delta. $$
We do so using an $F$-test.
End of explanation
model_TAe.fit_table(attach_to_self=False).loc[['AC'],:]
Explanation: Similar to before, we take the result of the $F$-test as a lack of convincing evidence against the model.
Thus, we now consider whether we can reduce the model by dropping the calendar effect by means of an $F$-test for
$$ H_0: \text{over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{extended over-dispersed Poisson}. $$
That is, we consider a reduction from $\mu_{ij} = \alpha_i + \beta_j + \gamma_k + \delta$ to $\mu_{ij} = \alpha_i + \beta_j + \delta$.
End of explanation
r_TA_ODP = apc.r_test(
apc.loss_TA(), family_null='od_poisson_response',
predictor='AC', data_format='CL'
)
print('R-statistic: {:.2f}'.format(r_TA_ODP['R_stat']))
print('p_value: {:.4f}'.format(r_TA_ODP['p_value']))
print('Power at R: {:.2f}'.format(r_TA_ODP['power_at_R']))
Explanation: With a p-value of $0.30$, we cannot reject this reduction. In the model without calendar effect, point forecasts match the chain-ladder technique forecasts.
We can now consider whether the model without calendar effect still survives the same tests it did before. First, we test it against a generalized log-normal model:
$$ H_0: \text{over-dispersed Poisson} \quad \text{vs} \quad H_A: \text{generalized log-normal}. $$
End of explanation
model_TA = apc.Model()
model_TA.data_from_df(apc.loss_TA(), data_format='CL')
model_TA.fit('od_poisson_response', 'AC')
sub_models_TA = [model_TA.sub_model(per_from_to=(1,5)),
model_TA.sub_model(
coh_from_to=(1,5), age_from_to=(1,5), per_from_to=(6,10)
),
model_TA.sub_model(age_from_to=(6,10)),
model_TA.sub_model(coh_from_to=(6,10))]
bartlett_TA_ODP = apc.bartlett_test(sub_models_TA)
f_TA_ODP = apc.f_test(model_TA, sub_models_TA)
print('Bartlett test p-value: {:.2f}'.format(bartlett_TA_ODP['p_value']))
print('F-test p-value: {:.2f} \n'.format(f_TA_ODP['p_value']))
Explanation: The model passes this test easily.
Now we can repeat the misspecification tests, testing
$$ H_{\sigma^2}: \sigma^2_\ell = \sigma^2 $$
with a Bartlett test and
$$ H_{\mu, \sigma^2}: \alpha_{i, \ell} + \beta_{j, \ell} + \delta_\ell = \alpha_i + \beta_j + \delta $$
with an $F$-test.
End of explanation |
11,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Train your first neural network
Step2: Import the Fashion MNIST dataset
This tutorial uses the Fashion MNIST dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here
Step3: Loading the dataset returns four NumPy arrays
Step4: Explore the data
Let's explore the format of the dataset before training the model. The following command shows that there are 60000 images in the training set, and each image is represented as 28 x 28 pixels
Step5: Likewise, there are 60000 labels in the training set
Step6: Each label is an integer between 0 and 9
Step7: There are 10000 images in the test set. Again, each image is represented as 28 x 28 pixels
Step8: And the test set contains 10000 image labels
Step9: Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall between 0 and 255
Step10: We will scale these values to the range 0 to 1 before feeding them to the neural network model. To do so, we divide the values by 255. It is important that the training set and the test set are preprocessed in the same way
Step11: To verify that the data is in the correct format and that we are ready to build and train the network, let's display the first 25 images from the training set and show the class name below each image.
Step12: Building the model
Building the neural network requires configuring the layers of the model and then compiling the model.
Set up the layers
The basic building block of a neural network is the layer. Layers extract representations from the data fed into the network. Hopefully, these representations are meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Many layers, such as tf.keras.layers.Dense, have parameters that are learned during training.
Step13: The first layer of the network, tf.keras.layers.Flatten, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are densely connected, or fully connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer is a 10-node softmax layer that returns an array of 10 probability scores summing to 1. Each node contains a score indicating the probability that the current image belongs to one of the 10 classes.
Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the compile step
Step14: Train the model
Training the neural network requires the following steps
Step15: As the model trains, the loss and accuracy metrics are displayed. The model reaches an accuracy of 0.88 (or 88%) on the training data.
Evaluate accuracy
Next, compare how the model performs on the test dataset
Step16: It turns out the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy represents overfitting. Overfitting is when a machine learning model performs worse on new, previously unseen inputs than it does on the training data.
Make predictions
With the model trained, we can use it to make predictions about some images.
Step17: Here, the model has predicted the label for each image in the test set. Let's take a look at the first prediction
Step18: A prediction is an array of 10 numbers. They represent the model's confidence that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value
Step19: So, the model is most confident that this image is an ankle boot, or class_names[9]. Examining the test label shows that this classification is correct
Step20: We can graph this to look at the full set of 10 class predictions.
Step21: Let's look at the image at position 0 and its prediction array.
Step22: Let's plot several images with their predictions. Correctly predicted labels are blue and incorrect predictions are red. The number gives the percentage (out of 100) for the predicted label. Note that the model can be wrong even when it is confident.
Step23: Finally, use the trained model to make a prediction about a single image.
Step24: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though we are using a single image, we need to add it to a list
Step25: Now predict the correct label for this image
Step26: model.predict returns a list of lists, one list for each image in the batch of data. Grab the prediction for our (only) image in the batch | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
Explanation: Train your first neural network: basic classification
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: Our TensorFlow community has translated these documents. As community translations are best-effort, there is no guarantee that they are an accurate and up-to-date reflection of the official English documentation. If you have suggestions to improve this translation, please send a pull request to the tensorflow/docs GitHub repository. To volunteer to write or review community translations, contact the [email protected] list.
This tutorial trains a neural network model to classify images of clothing, such as sneakers and shirts. It is fine if you do not understand every detail; this is an overview of a TensorFlow program with the details explained as we go.
The guide uses tf.keras, a high-level API for building and training models in TensorFlow.
End of explanation
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Explanation: Import the Fashion MNIST dataset
This tutorial uses the Fashion MNIST dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
  <tr><td>
    <img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
         alt="Fashion MNIST sprite" width="600">
  </td></tr>
  <tr><td align="center">
    <b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
  </td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset, often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing we will use here.
This tutorial uses Fashion MNIST for variety, and because it is a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They are good starting points to test and debug code.
We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow:
End of explanation
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
Explanation: Loading the dataset returns four NumPy arrays:
The train_images and train_labels arrays are the training set, the data the model uses to learn.
The model is tested against the test set: the test_images and test_labels arrays.
The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The labels (the classification target) are an array of integers, ranging from 0 to 9. These correspond to the class of clothing each image represents:
<table>
  <tr>
    <th>Label</th>
    <th>Class</th>
  </tr>
  <tr>
    <td>0</td>
    <td>T-shirt/top</td>
  </tr>
  <tr>
    <td>1</td>
    <td>Trouser</td>
  </tr>
  <tr>
    <td>2</td>
    <td>Pullover</td>
  </tr>
  <tr>
    <td>3</td>
    <td>Dress</td>
  </tr>
  <tr>
    <td>4</td>
    <td>Coat</td>
  </tr>
  <tr>
    <td>5</td>
    <td>Sandal</td>
  </tr>
  <tr>
    <td>6</td>
    <td>Shirt</td>
  </tr>
  <tr>
    <td>7</td>
    <td>Sneaker</td>
  </tr>
  <tr>
    <td>8</td>
    <td>Bag</td>
  </tr>
  <tr>
    <td>9</td>
    <td>Ankle boot</td>
  </tr>
</table>
Each image is mapped to a single label. Since the class names are not included with the dataset, store them here to use later when plotting the images:
End of explanation
train_images.shape
Explanation: Explore the data
Let's explore the format of the dataset before training the model. The following command shows that there are 60000 images in the training set, and each image is represented as 28 x 28 pixels:
End of explanation
len(train_labels)
Explanation: Likewise, there are 60000 labels in the training set:
End of explanation
train_labels
Explanation: Each label is an integer between 0 and 9:
End of explanation
test_images.shape
Explanation: There are 10000 images in the test set. Again, each image is represented as 28 x 28 pixels:
End of explanation
len(test_labels)
Explanation: And the test set contains 10000 image labels:
End of explanation
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
Explanation: Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall between 0 and 255:
End of explanation
train_images = train_images / 255.0
test_images = test_images / 255.0
Explanation: We will scale these values to the range 0 to 1 before feeding them to the neural network model. To do so, we divide the values by 255. It is important that the training set and the test set are preprocessed in the same way:
End of explanation
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
Explanation: To verify that the data is in the correct format and that we are ready to build and train the network, let's display the first 25 images from the training set and show the class name below each image.
End of explanation
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
Explanation: Building the model
Building the neural network requires configuring the layers of the model and then compiling the model.
Set up the layers
The basic building block of a neural network is the layer. Layers extract representations from the data fed into the network. Hopefully, these representations are meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Many layers, such as tf.keras.layers.Dense, have parameters that are learned during training.
End of explanation
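Optionally (this check is not part of the original tutorial), you can inspect the architecture right after building it; model.summary() is a standard Keras call that prints each layer's output shape and parameter count:
# Optional sanity check: the first Dense layer has 784*128 + 128 parameters,
# the second has 128*10 + 10.
model.summary()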
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
Explanation: The first layer of the network, tf.keras.layers.Flatten, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are densely connected, or fully connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer is a 10-node softmax layer that returns an array of 10 probability scores summing to 1. Each node contains a score indicating the probability that the current image belongs to one of the 10 classes.
Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the compile step:
Loss function - measures how accurate the model is during training. We want to minimize this function to steer the model in the right direction.
Optimizer - how the model is updated based on the data it sees and its loss function.
Metrics - used to monitor the training and testing steps. The example below uses accuracy, the fraction of the images that are correctly classified.
End of explanation
model.fit(train_images, train_labels, epochs=10)
Explanation: Train the model
Training the neural network requires the following steps:
Feed the training data to the model. In this example, the training data is in the train_images and train_labels arrays.
The model learns to associate images and labels.
We ask the model to make predictions about a test set, in this example the test_images array. We verify that the predictions match the labels from the test_labels array.
To start training, call the model.fit method, so called because it "fits" the model to the training data:
End of explanation
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
Explanation: As the model trains, the loss and accuracy metrics are displayed. The model reaches an accuracy of 0.88 (or 88%) on the training data.
Evaluate accuracy
Next, compare how the model performs on the test dataset:
End of explanation
predictions = model.predict(test_images)
Explanation: It turns out the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy represents overfitting. Overfitting is when a machine learning model performs worse on new, previously unseen inputs than it does on the training data.
Make predictions
With the model trained, we can use it to make predictions about some images.
End of explanation
predictions[0]
Explanation: Here, the model has predicted the label for each image in the test set. Let's take a look at the first prediction:
End of explanation
np.argmax(predictions[0])
Explanation: A prediction is an array of 10 numbers. They represent the model's confidence that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value:
End of explanation
test_labels[0]
Explanation: So, the model is most confident that this image is an ankle boot, or class_names[9]. Examining the test label shows that this classification is correct:
End of explanation
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
Explanation: We can graph this to look at the full set of 10 class predictions.
End of explanation
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
Explanation: Let's look at the image at position 0 and its prediction array.
End of explanation
# Plot the first X test images with their predicted and true labels.
# Color correct predictions blue and incorrect predictions red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
Explanation: Let's plot several images with their predictions. Correctly predicted labels are blue and incorrect predictions are red. The number gives the percentage (out of 100) for the predicted label. Note that the model can be wrong even when it is confident.
End of explanation
# Grab an image from the test dataset.
img = test_images[0]
print(img.shape)
Explanation: Finally, use the trained model to make a prediction about a single image.
End of explanation
# Add the image to a batch where it is the only member.
img = (np.expand_dims(img,0))
print(img.shape)
Explanation: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though we are using a single image, we need to add it to a list:
End of explanation
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
Explanation: Now predict the correct label for this image:
End of explanation
np.argmax(predictions_single[0])
Explanation: model.predict returns a list of lists, one list for each image in the batch of data. Grab the prediction for our (only) image in the batch:
End of explanation |
11,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this analysis report I would like to find some patterns or characteristics that make some players the best.<br>
After analysing the data I will use data analysis and statistics techniques using the libraries Pandas, Numpy and Matplotlib to manage datasets, make vectorized calculations and create visualisations to help understand the results.
Questions
Which school did best players go to?
Did best players get highest salaries?
Is there a correlation between best teams and managers?
Step8: Intro to Data Analysis
Final Project
Step9: Analyse data
Main Table
Step11: Missing Values
Step12: As you can see, more than 50% of the entries, just in the Master (players) table, have missing values. I could not find any missing values in the tables that would directly affect my calculations. However, I had to deal with missing relationships between the players and schools tables, as you will see underneath.
Report found answers
To answer any question about best players, I first have to define what a best player is and how I will score it.
Best Player
Step13: As you can see underneath, most teams won between 50 and 100 games in the time window contained in this dataset. I took a sample of 150 teams to show the tendency.
Step14: Some statistical numbers about Team Wins
Losses
Step15: Now, showing below the number of games that most teams lost, the average is slightly higher (from 60 to 100).<br>
Does it make sense? Well, not everyone can win...
Step17: And some more statistical numbers about Team Losses
Defining Scores
Any score will be calculated as follows
Step18: Statistical information and first entries of Team Results
Step19: Again, now displayed as a distribution, the team scores look close to even, but slightly negatively skewed, which has the same meaning as the previous plot
Step20: Best Teams
Since Teams are too many, I will take a sample using Pandas' quantile function set to 0.8 in order to get only the best 20% of best teams.
Step21: Now we have only 30 teams. If you check the previous figure, the minimum score_norm was -1; now it is 0.064, with a maximum of 0.664.
Best Teams
After selecting only the teams with higher scores, the difference is quite significant. This plot is clearly positively skewed, showing that the best scores are reserved for only the best of the best teams.
Step22: Best Players
The score range now goes from -3 to 3.
Step23: Now looking only at the best players, the distribution of score is perfectly symmetric.
Step24: Now looking at the best 2% of players the shape of this positively skewed distribution represents only a small portion on the right side of the previous plot.
Step27: Question 1
Which school did winners go to?
Surprisingly, the best player in terms of games played on winning teams does not report a school, as you can see below.
Step28: Almost none of the best players report schools
Missing Relationships
I will have to exclude player entries with no schools in order to generate a ranking of best schools, based on best players who have reported one or more schools. To do so, I will count schools for each player, and then exclude those entries with school equal to zero.
Top10 Schools regarding best players
Step29: List of top 10 schools
Question 2
Did best players get the highest salaries?
Apparently, participating in more winning games does not have a strict correlation with the highest salaries, but these players still earn very large salaries.
To calculate the salaries, I took the mean salary for each player (expressed in millions of USD).
As you can see underneath, the highest salaries did not go to the players of the teams with the most successful games.
Step30: Statistical information of best players scores vs. salaries (expressed in millions of USD)
Step31: Here is the difference between the player getting the highest salary and the one participating in the most winning games.
Step32: Do best players get higher salaries? It is pretty difficult to see using only the variables on this report. It might be interesting to include more variables like manager biases or getting a better normalization of salaries (money does not have the same value now and 100 years ago).
Question 3
Is there a correlation between winning teams and managers?
The plot displaying the distribution of manager scores seems to be perfectly symmetric, which simply means that there are managers for all teams, from worst to best.
Step33: It seems that best teams were managed by the best managers (or managers are considered the best because they won more games). This shows me how important leadership is to succeed.
Step34: Limitations
I think there are many variables that I could include to see whether data could tell us anything more interesting, perhaps using Pitching, Fielding and Batting tables. Even curious patterns like birth or death places, or even doing the opposite job, like finding patterns correlating with losing teams.
The following and last plot shows where players die more often. Please don't play baseball in Philadelphia!!! | Python Code:
# import libraries
import os
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import display
%pylab inline
from bokeh.io import output_notebook, show
from bkcharts import Donut
output_notebook()
Explanation: Introduction
In this analysis report I would like to find some patterns or characteristics that make some players the best.<br>
After analysing the data I will use data analysis and statistics techniques using the libraries Pandas, Numpy and Matplotlib to manage datasets, make vectorized calculations and create visualisations to help understand the results.
Questions
Which school did best players go to?
Did best players get highest salaries?
Is there a correlation between best teams and managers?
End of explanation
# declare helper functions and load dataset
def load_files(data_path, filenames):
Load data files
Loads every csv file from the data directory.
Args:
data_path(string): Path to files
filenames(list): List of files to load from the data_path
Returns:
dict: A dictionary of table names as keys and DataFrame objects as values
dfs = {}
for f in filenames:
parts = f.split('.')
if parts[1] != 'csv':
continue
dfname = parts[0]
absfilename = data_path + '/' + f
dfs[dfname] = pd.read_csv(absfilename)
return dfs
def describe_df(df):
Describe a DataFrame
Prints DataFrame information, the result of describe() method, and a blank line underneath.
Args:
df(DataFrame): The DataFrame to be described
print(df)
display(dataframes[df].describe())
print('')
def describe_dataframes(dataframes):
Describe a list of DataFrame objects
Args:
dataframes(list): A list of DataFrame objects
for df in dataframes:
describe_df(df)
def display_1d_array(array, col_name='Data'):
Display 1D Numpy array nicely
Args:
array (np.array): 1D Numpy array
col_name (string, optional): Name for column to display
display(pd.DataFrame(array, columns=[col_name], index=[i+1 for i in range(len(array))]))
# declare visualization functions
def distribution(data, figsize=(7, 5), color='c', xlabel='X', ylabel='Y', title=None):
Draw a Seaborn Distribution plot
Draws a distribution plot setting proper size and title
Args:
data(DataFrame): Data to be used to draw
color(string, optional): Single character that represents a colour setting. Defaults to c
xlabel(string, optional): Label for X axis. Defaults to X
ylabel(string, optional): Label for Y axis. Defaults to Y
title(string, optional): Title to be displayed on top of the plot. Defaults to None
.. _Seaborn Documentation:
https://seaborn.pydata.org/generated/seaborn.distplot.html
f, axes = plt.subplots(1, 1, figsize=(7, 5), sharex=True)
sns.despine(left=True)
sns.distplot(data, color=color, kde_kws={"shade": True})
axes.set(xlabel=xlabel, ylabel=ylabel)
plt.setp(axes, yticks=[])
plt.tight_layout()
if title is not None:
f.suptitle(title)
def plot_correlation(x, y, xlabel, ylabel, legend=None, title=None):
Draw a Seaborn Regplot plot
Plot data and a linear regression model fit.
Args:
x(array): X data in Numpy array format
y(array): Y data in Numpy array format
xlabel(string): Label for X axis
ylabel(string): Label for Y axis
legend(string, optional): Title to be displayed inside the plot. Defaults to None
title(string, optional): Title to be displayed on top of the plot. Defaults to None
.. _Seaborn Documentation:
https://seaborn.pydata.org/generated/seaborn.regplot.html
if title is not None:
fig = plt.figure()
fig.suptitle(title)
ax = sns.regplot(x="x", y="y", data=pd.DataFrame({'x': x, 'y': y}), label=legend, x_jitter=.2)
ax.set(xlabel=xlabel, ylabel=ylabel)
if legend is not None:
ax.legend(loc="best")
def plot_correlation2(data, x_var, y_var, title=None):
Draw a Seaborn Joinplot plot
Draw a plot of two variables with bivariate and univariate graphs.
Args:
data(DataFrame): Data to draw the plot
x_var(string): Name of variable for X axis
y_var(string): Name of variable for Y axis
title(string, optional): Title to be displayed on top of the plot. Defaults to None
.. _Seaborn Documentation:
http://seaborn.pydata.org/generated/seaborn.jointplot.html
sns.set(style="darkgrid", color_codes=True)
g = sns.jointplot(x_var, y_var, data=data, kind="reg",
xlim=(0, data[x_var].max()), ylim=(0, data[y_var].max()), color="r", size=7)
if title is not None:
g.fig.suptitle(title)
# load data
data_path = 'data/baseballdatabank-2017.1/core'
filenames = os.listdir(data_path)
dataframes = load_files(data_path, filenames)
Explanation: Intro to Data Analysis
Final Project: Investigate Data
In this project I will:
1. Choose a dataset (titanic, baseball)
2. Analyse the data
3. Make questions based on the analysis
4. Report found answers
End of explanation
dataframes['Master'].head()
Explanation: Analyse data
Main Table: Master
End of explanation
rows_with_missing_values = len(dataframes['Master'][dataframes['Master'].isnull().any(axis=1)])
total = len(dataframes['Master'])
def bokeh_pie_chart(values, labels):
Pie chart
Draws a Bokeh pie chart
Args:
values(list): List of Numpy arrays, one per type of data to show
labels(list): List of labels matching amount of arrays in values argument
data = pd.Series(values, index=labels)
pie_chart = Donut(data)
show(pie_chart)
bokeh_pie_chart([rows_with_missing_values, total-rows_with_missing_values], ['Rows with missing values', 'Rows without missing values'])
Explanation: Missing Values
End of explanation
# Wins calculation
team_wins_by_year = dataframes['Teams'].groupby(['teamID', 'yearID'], as_index=False)['teamID', 'W'].sum()
team_wins_by_year.sort_values('W', ascending=False).describe()
display(team_wins_by_year.describe())
Explanation: As you can see, more than 50% of the entries, just in the Master (players) table, have missing values. I could not find any missing values in the tables that would directly affect my calculations. However, I had to deal with missing relationships between the players and schools tables, as you will see underneath.
Report found answers
To answer any question about best players, I first have to define what a best player is and how I will score it.
Best Player: Player who participated on most winning matches through time being part of one or more teams.
Player/Team Score: Ratio from the difference between Wins and Losses over amount of games played, assigned to every team.
Wins
End of explanation
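As a quick illustration of the score defined above (these numbers are hypothetical, not taken from the dataset), a team with 90 wins and 60 losses gets (90 - 60) / (90 + 60) = 0.2:
# Hypothetical example of the (W - L) / (W + L) score; not part of the original notebook.
w, l = 90, 60
print((w - l) / (w + l))  # 0.2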
team_wins_by_year_sample = team_wins_by_year.sample(n=150)
fig = plt.figure(figsize=(16,10))
ax = sns.stripplot(x="teamID", y="W", data=team_wins_by_year_sample, size=10)
for item in ax.get_xticklabels():
item.set_rotation(45)
ax.set(xlabel='Teams', ylabel='Wins')
fig.suptitle('Teams by Yearly Wins (based on sample of 150)', fontsize=16);
# Team Wins calculation
team_wins_total = team_wins_by_year.groupby('teamID', as_index=False)['W'].sum()
display(team_wins_total.describe())
Explanation: As you can see underneath, most teams won between 50 and 100 games in the time window contained in this dataset. I took a sample of 150 teams to show the tendency.
End of explanation
# Losses calculation
team_losses_by_year = dataframes['Teams'].groupby(['teamID', 'yearID'], as_index=False)['teamID', 'L'].sum()
team_losses_by_year.sort_values('L', ascending=False).describe()
display(team_losses_by_year.describe())
team_losses_by_year_sample = team_losses_by_year.sample(n=150)
Explanation: Some statistical numbers about Team Wins
Losses
End of explanation
fig = plt.figure(figsize=(16,10))
ax = sns.stripplot(x="teamID", y="L", data=team_losses_by_year_sample, size=8)
for item in ax.get_xticklabels():
item.set_rotation(45)
ax.set(xlabel='Teams', ylabel='Losses')
fig.suptitle('Teams by Yearly Losses (based on sample of 150)', fontsize=16);
# Team losses calculation
team_losses_total = team_losses_by_year.groupby('teamID', as_index=False)['L'].sum()
display(team_losses_total.describe())
Explanation: Now, showing below the number of games that most teams lost, the average is slightly higher (from 60 to 100).<br>
Does it make sense? Well, not everyone can win...
End of explanation
def calc_score(x):
Score calculation
Calculates score based on Wins vs. Losses over amount of games
Args:
x(DataFrame): One column DataFrame
Returns:
float: Formula result
return (x['W'] - x['L']) / (x['W'] + x['L'])
Explanation: And some more statistical numbers about Team Losses
Defining Scores
Any score will be calculated as follows: (wins - losses) / n_games.
End of explanation
# Team results calculation
teams_results = team_wins_total.merge(team_losses_total, on='teamID', how='inner')
tscores = teams_results.apply(calc_score, axis=1)
teams_results['score'] = tscores.values
display(tscores.describe())
display(teams_results.head())
Explanation: Statistical information and first entries of Team Results
End of explanation
distribution(tscores, xlabel='Score', ylabel='Teams', title='Team Score Distribution')
Explanation: Again, now displayed as a distribution, the team scores look close to even, but slightly negatively skewed, which has the same meaning as the previous plot: there are more teams losing than winning.
End of explanation
# Best teams calculation
best_teams = teams_results[teams_results['score'] > teams_results.quantile(q=0.8)['score']].sort_values('score', ascending=False)
display(best_teams.describe())
Explanation: Best Teams
Since Teams are too many, I will take a sample using Pandas' quantile function set to 0.8 in order to get only the best 20% of best teams.
End of explanation
distribution(best_teams['score'].values, color='m', xlabel='Score', ylabel='Teams', title='Best Team Score Distribution')
Explanation: Now we have only 30 teams. If you check the previous figure, the minimum score_norm was -1; now it is 0.064, with a maximum of 0.664.
Best Teams
After selecting only the teams with higher scores, the difference is quite significant. This plot is clearly positively skewed, showing that the best scores are reserved for only the best of the best teams.
End of explanation
# Player results calculation
players_results = dataframes['Appearances'].merge(teams_results, on='teamID', how='inner').groupby(['playerID'], as_index=False)['W', 'L', 'score'].sum()
pscores = players_results['score'].values
display(players_results.describe())
Explanation: Best Players
The score range now goes from -3 to 3.
End of explanation
distribution(pscores, xlabel='Score', ylabel='Players', title='Best Players Score Distribution')
Explanation: Now looking only at the best players, the distribution of score is perfectly symmetric.
End of explanation
# Players participating on games calculation
players_appearences = players_results.merge(dataframes['Appearances'].groupby('playerID', as_index=False)['teamID'].count(), on='playerID', how='left')
best_players_appearences = players_appearences[players_appearences['score'] > players_appearences.quantile(q=0.98)['score']]
distribution(best_players_appearences['score'].values, color='r', xlabel='Score', ylabel='Players', title='2% Best Players Distribution')
Explanation: Now looking at the best 2% of players the shape of this positively skewed distribution represents only a small portion on the right side of the previous plot.
End of explanation
def set_to_str(x):
Set to string helper function
Converts a set to string
Args:
x(set): Set to be represented as a string
Returns:
string: String representation of set
x = ','.join([str(c) for c in x])
return x
def set_to_count(x):
Set to count helper function
Converts a set into the count of its elements
Args:
x(set): Set to be counted
Returns:
int: Number of elements contained in the set
if str(x) == '{nan}':
return 0
return len(x)
schools_of_winners = best_players_appearences.merge(dataframes['CollegePlaying'], on='playerID', how='left')
best_players_appearences_with_schools = best_players_appearences.merge(schools_of_winners, on='playerID', how='left').groupby('playerID', as_index=False).agg({'schoolID': lambda x: set(x)})
best_players_appearences_with_schools['school_count'] = best_players_appearences_with_schools['schoolID'].apply(set_to_count)
display(best_players_appearences_with_schools.merge(dataframes['Master'], on='playerID', how='left')[['playerID', 'nameGiven', 'schoolID', 'school_count']].head())
Explanation: Question 1
Which school did winners go to?
Surprisingly, the best player in terms of games played on winning teams does not report a school, as you can see below.
End of explanation
best_players_with_school_and_score = best_players_appearences.merge(best_players_appearences_with_schools, on='playerID', how='left')
top10_schools = best_players_with_school_and_score[best_players_with_school_and_score['school_count'] > 0].sort_values('score', ascending=False).apply({'schoolID': lambda x: ','.join(x)}).iloc[0:10]
display_1d_array(top10_schools.values, col_name='School')
Explanation: Almost none of the best players report schools
Missing Relationships
I will have to exclude player entries with no schools in order to generate a ranking of best schools, based on best players who have reported one or more schools. To do so, I will count schools for each player, and then exclude those entries with school equal to zero.
Top10 Schools regarding best players
End of explanation
def calculate_mean_salary(x):
count = len(x)
return ((1/count) * np.sum(x)/1000000)
best_players_salaries = dataframes['Salaries'].merge(best_players_with_school_and_score, on='playerID', how='right')[['playerID', 'salary', 'yearID', 'score']].dropna().sort_values(['salary'], ascending=False)
bpsalaries = best_players_salaries['salary'].values
#bscore_norm = MinMaxScaler().fit_transform(bpsalaries.astype(np.float64).reshape(-1, 1)).flatten()
#best_players_salaries['salary_norm'] = bpscore_norm
highest_salaries_players_with_score = best_players_salaries.groupby('playerID', as_index=False).agg({'salary': calculate_mean_salary}).merge(players_results, on='playerID', how='left')
display(highest_salaries_players_with_score[['salary', 'score']].describe())
Explanation: List of top 10 schools
Question 2
Did best players get the highest salaries?
Apparently, participating in more winning games does not have a strict correlation with the highest salaries, but these players still earn very large salaries.
To calculate the salaries, I took the mean salary for each player (expressed in millions of USD).
As you can see underneath, the highest salaries did not go to the players of the teams with the most successful games.
End of explanation
max_salary_idx = highest_salaries_players_with_score[['salary']].idxmax()
max_score_idx = highest_salaries_players_with_score[['score']].idxmax()
print('Highest salary (millions)')
display(highest_salaries_players_with_score.loc[max_salary_idx].merge(dataframes['Master'], on='playerID', how='left')[['playerID', 'nameGiven', 'salary', 'score']])
print('Highest score')
display(highest_salaries_players_with_score.loc[max_score_idx].merge(dataframes['Master'], on='playerID', how='left')[['playerID', 'nameGiven', 'salary', 'score']])
Explanation: Statistical information of best players scores vs. salaries (expressed in millions of USD)
End of explanation
plot_correlation2(data=highest_salaries_players_with_score, x_var="score", y_var="salary", title='Correlation between Salary and Score')
Explanation: Here is the difference between the player getting the highest salary and the one participating in the most winning games.
End of explanation
managers = dataframes['Managers']
mscores = managers.apply(calc_score, axis=1)
managers['score'] = mscores
managers = managers.groupby('playerID', as_index=False).sum()[['playerID', 'score', 'rank']]
best_managers = managers.sort_values('score', ascending=False)
distribution(mscores, color='g', xlabel='Score', ylabel='Managers', title='Manager Score Distribution')
Explanation: Do best players get higher salaries? It is pretty difficult to see using only the variables on this report. It might be interesting to include more variables like manager biases or getting a better normalization of salaries (money does not have the same value now and 100 years ago).
Question 3
Is there a correlation between winning teams and managers?
The plot displaying the distribution of manager scores seems to be perfectly symmetric, which simply means that there are managers for all teams, from worst to best.
End of explanation
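Before turning to managers, here is a minimal sketch (not in the original analysis) of the per-year salary normalization suggested above; it assumes a DataFrame with yearID and salary columns, like the Salaries table, and uses made-up toy numbers:
# Hypothetical illustration: z-score salaries within their own year so that
# salaries from different eras become roughly comparable.
toy = pd.DataFrame({'yearID': [1985, 1985, 2015, 2015],
                    'salary': [500000.0, 900000.0, 5000000.0, 9000000.0]})
toy['salary_z'] = toy.groupby('yearID')['salary'].transform(
    lambda s: (s - s.mean()) / s.std())
print(toy)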
manager_rank_score_corr = best_managers.sort_values('score', ascending=False).iloc[0:100][['playerID', 'score', 'rank']]
plot_correlation(x=manager_rank_score_corr['rank'], y=manager_rank_score_corr['score'], xlabel='Rank', ylabel='Score', title='Correlation between Manager Score and Ranking')
Explanation: It seems that best teams were managed by the best managers (or managers are considered the best because they won more games). This shows me how important leadership is to succeed.
End of explanation
player_death_city = dataframes['Master'][['playerID', 'deathCity']].dropna().merge(players_results, on='playerID', how='left')
worst_players = player_death_city.sort_values('score')[['playerID', 'deathCity', 'score']].dropna()
top10_worst = worst_players[worst_players['score'] < worst_players.quantile(q=0.1)['score']].sort_values('score').groupby('deathCity', as_index=False)['playerID'].count().sort_values('playerID', ascending=False).iloc[0:10]
#display(top10_worst.iloc[0:10])
f, ax = plt.subplots(figsize=(10, 7))
sns.barplot(x="deathCity", y="playerID", data=top10_worst)
#sns.stripplot(x="deathCity", y="playerID", data=top10_worst)
ax.set(xlabel='Death City', ylabel='Amount of Dead Players')
f.suptitle('Just for fun: Most deadly cities to play baseball');
Explanation: Limitations
I think there are many variables that I could include to see whether data could tell us anything more interesting, perhaps using Pitching, Fielding and Batting tables. Even curious patterns like birth or death places, or even doing the opposite job, like finding patterns correlating with losing teams.
The following and last plot shows where players die more often. Please don't play baseball in Philadelphia!!!
End of explanation |
11,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally, use your count_letters function to solve the original question. | Python Code:
def number_to_words(n):
Given a number n between 1-1000 inclusive return a list of words for the number.
x = []
a = {1:'one',2:'two',3:'three',4:'four',5:'five',6:'six',7:'seven',8:'eight',9:'nine',10:'ten',
11:'eleven',12:'twelve',13:'thirteen',14:'fourteen',15:'fifteen',16:'sixteen',17:'seventeen',18:'eighteen'
,19:'nineteen',20:'twenty',30:'thirty',40:'forty',50:'fifty',60:'sixty',70:'seventy',80:'eighty',90:'ninety'}
b = 'hundred'
c = 'thousand'
d = 'and'
if n <= 20 and n >= 1:
x.append(a[n])
return x
elif n > 20 and n < 100:
if n % 10 == 0:
x.append(a[n])
return x
else:
y = str(n)
x.append(a[int(y[0] + '0')])
x.append(a[int(y[1])])
return x
elif n >= 100 and n < 1000:
if n % 100 == 0:
y = str(n)
x.append(a[int(y[0])])
x.append(b)
return x
elif n % 10 == 0:
y = str(n)
x.append(a[int(y[0])])
x.append(b)
x.append(d)
x.append(a[int(y[1]+'0')])
return x
elif str(n)[1] == '0':
y = str(n)
x.append(a[int(y[0])])
x.append(b)
x.append(d)
x.append(a[int(y[2])])
return x
elif str(n)[1] == '1':
y = str(n)
x.append(a[int(y[0])])
x.append(b)
x.append(d)
x.append(a[int(y[1]+y[2])])
return x
else:
y = str(n)
x.append(a[int(y[0])])
x.append(b)
x.append(d)
x.append(a[int(y[1]+'0')])
x.append(a[int(y[2])])
return x
else:
x.append(a[1])
x.append(c)
return x
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
assert number_to_words(16) == ['sixteen']
assert number_to_words(507) == ['five','hundred','and','seven']
assert number_to_words(735) == ['seven', 'hundred', 'and', 'thirty', 'five']
assert len(''.join(number_to_words(342))) == 23
assert True # use this for grading the number_to_words tests.
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
def count_letters(n):
    """Count the number of letters used to write out the words for 1-n inclusive."""
z = 0
x = range(1,n+1)
for m in x:
j = number_to_words(m)
k = len(''.join(j))
z += k
return z
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
End of explanation
assert count_letters(6) == 22
assert True # use this for grading the count_letters tests.
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
count_letters(1000)
assert True # use this for grading the answer to the original question.
Explanation: Finally, use your count_letters function to solve the original question.
End of explanation |
11,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
automaton.shuffle(a1, ...)
The (accessible part of the) shuffle product of automata.
Preconditions
Step1: Boolean Automata
The shuffle product of automata computes the shuffling of their languages
Step2: Weighted automata
In the case of weighted automata, weights are "kept" with the letters.
Step3: Associativity
This operator is associative, and it is actually implemented as a variadic operator; a.shuffle(b, c) is not exactly the same as a.shuffle(b).shuffle(c) | Python Code:
import vcsn
Explanation: automaton.shuffle(a1, ...)
The (accessible part of the) shuffle product of automata.
Preconditions:
- all the labelsets are letterized
See also:
- automaton.conjunction
- automaton.infiltration
- expression.shuffle
Examples
End of explanation
std = lambda exp: vcsn.B.expression(exp).standard()
a = std('abc')
a
a.shuffle(std('xyz'))
Explanation: Boolean Automata
The shuffle product of automata computes the shuffling of their languages: all the possible interleavings.
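For a quick sanity check of the interleavings you can enumerate a few accepted words. This is only a sketch and assumes the shortest method is available on automata in your Vcsn build:
# each listed word should be an interleaving of 'abc' and 'xyz', e.g. axbycz or xyzabc
a.shuffle(std('xyz')).shortest(10)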
End of explanation
c = vcsn.context('lal_char, seriesset<lal_char, z>')
std = lambda exp: c.expression(exp).standard()
std('<A>a<B>b').shuffle(std('<X>x<Y>y'))
Explanation: Weighted automata
In the case of weighted automata, weights are "kept" with the letters.
End of explanation
x = std('<x>a')
y = std('<y>a')
z = std('<z>a')
x.shuffle(y, z)
x.shuffle(y).shuffle(z)
Explanation: Associativity
This operator is associative, and it is actually implemented as a variadic operator; a.shuffle(b, c) is not exactly the same as a.shuffle(b).shuffle(c): they are the same automata, but the former is labeled with 3-tuples, not 2-tuples.
End of explanation |
11,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with Streaming Data
Learning Objectives
1. Learn how to process real-time data for ML models using Cloud Dataflow
2. Learn how to serve online predictions using real-time data
Introduction
It can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial.
Typically you will have the following
Step1: Re-train our model with trips_last_5min feature
In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook 4a_streaming_data_training.ipynb. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for trips_last_5min in the model and the dataset.
Simulate Real Time Taxi Data
Since we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.
Inspect the iot_devices.py script in the taxicab_traffic folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery.
In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub.
To execute the iot_devices.py script, launch a terminal and navigate to the asl-ml-immersion/notebooks/building_production_ml_systems/solutions directory. Then run the following two commands.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID
You will see new messages being published every 5 seconds. Keep this terminal open so it continues to publish events to the Pub/Sub topic. If you open Pub/Sub in your Google Cloud Console, you should be able to see a topic called taxi_rides.
Create a BigQuery table to collect the processed data
In the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called taxifare and a table within that dataset called traffic_realtime.
Step2: Next, we create a table called traffic_realtime and set up the schema.
Step3: Launch Streaming Dataflow Pipeline
Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.
The pipeline is defined in ./taxicab_traffic/streaming_count.py. Open that file and inspect it.
There are 5 transformations being applied
Step5: Make predictions from the new data
In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook.
The add_traffic_last_5min function below will query the traffic_realtime table to find the most recent traffic information and add that feature to our instance for prediction.
Step6: The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.
Step7: Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well.
Copy the ENDPOINT_ID from the deployment in the previous lab to the beginning of the block below. | Python Code:
import numpy as np
import os
import shutil
import tensorflow as tf
from google.cloud import aiplatform
from google.cloud import bigquery
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
Explanation: Working with Streaming Data
Learning Objectives
1. Learn how to process real-time data for ML models using Cloud Dataflow
2. Learn how to serve online predictions using real-time data
Introduction
It can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial.
Typically you will have the following:
- A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis)
- A messaging bus that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub)
- A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow)
- A persistent store to keep the processed data (in our case this is BigQuery)
These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below.
Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below.
<img src='../assets/taxi_streaming_data.png' width='80%'>
In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of trips_last_5min data as an additional feature. This is our proxy for real-time traffic.
End of explanation
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
Explanation: Re-train our model with trips_last_5min feature
In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook 4a_streaming_data_training.ipynb. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for trips_last_5min in the model and the dataset.
Simulate Real Time Taxi Data
Since we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.
Inspect the iot_devices.py script in the taxicab_traffic folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery.
In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub.
To execute the iot_devices.py script, launch a terminal and navigate to the asl-ml-immersion/notebooks/building_production_ml_systems/solutions directory. Then run the following two commands.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID
You will see new messages being published every 5 seconds. Keep this terminal open so it continues to publish events to the Pub/Sub topic. If you open Pub/Sub in your Google Cloud Console, you should be able to see a topic called taxi_rides.
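If you want to publish a quick test message from this notebook yourself, a minimal sketch with the Pub/Sub client library looks like the following. The payload here is just an illustrative placeholder, not the schema used by iot_devices.py:
from google.cloud import pubsub_v1
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, "taxi_rides")  # PROJECT was set at the top of this notebook
future = publisher.publish(topic_path, data=b'{"test": "message"}')
print("published message id:", future.result())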
Create a BigQuery table to collect the processed data
In the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called taxifare and a table within that dataset called traffic_realtime.
End of explanation
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
Explanation: Next, we create a table called traffic_realtime and set up the schema.
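As a quick check that the table is in place, you can fetch its metadata with the client created above (a small sketch; get_table resolves the dataset against the client's default project):
table = bq.get_table("taxifare.traffic_realtime")
print(table.schema)
print("rows so far:", table.num_rows)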
End of explanation
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
Explanation: Launch Streaming Dataflow Pipeline
Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.
The pipeline is defined in ./taxicab_traffic/streaming_count.py. Open that file and inspect it.
There are 5 transformations being applied:
- Read from PubSub
- Window the messages
- Count number of messages in the window
- Format the count for BigQuery
- Write results to BigQuery
TODO: Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the beam programming guide for guidance. To check your answer reference the solution.
For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds.
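In Beam's Python SDK that windowing step looks roughly like the sketch below; messages stands for the upstream PCollection of Pub/Sub messages, and size and period are in seconds:
import apache_beam as beam
from apache_beam.transforms import window
# 5-minute sliding windows, recomputed every 15 seconds
windowed = (messages
            | "window" >> beam.WindowInto(window.SlidingWindows(size=5 * 60, period=15)))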
In a new terminal, launch the dataflow pipeline using the command below. You can change the BUCKET variable, if necessary. Here it is assumed to be your PROJECT_ID.
bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID # CHANGE AS NECESSARY
python3 ./taxicab_traffic/streaming_count.py \
--input_topic taxi_rides \
--runner=DataflowRunner \
--project=$PROJECT_ID \
--temp_location=gs://$BUCKET/dataflow_streaming
Once you've submitted the command above you can examine the progress of that job in the Dataflow section of Cloud console.
Explore the data in the table
After a few moments, you should also see new data written to your BigQuery table as well.
Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
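If you prefer to poll from Python instead of re-running the cell magic, the same query can be issued with the BigQuery client instantiated earlier (a small convenience sketch):
df = bq.query(
    "SELECT * FROM `taxifare.traffic_realtime` ORDER BY time DESC LIMIT 10"
).to_dataframe()
df.head()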
End of explanation
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
    query_string = """
    SELECT
      *
    FROM
      `taxifare.traffic_realtime`
    ORDER BY
      time DESC
    LIMIT 1
    """
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = int(trips)
return instance
Explanation: Make predictions from the new data
In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook.
The add_traffic_last_5min function below will query the traffic_realtime table to find the most recent traffic information and add that feature to our instance for prediction.
End of explanation
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
Explanation: The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.
End of explanation
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at this sample https://github.com/googleapis/python-aiplatform/blob/master/samples/snippets/predict_custom_trained_model_sample.py
ENDPOINT_ID = # TODO: Copy the `ENDPOINT_ID` from the deployment in the previous lab.
api_endpoint = f'{REGION}-aiplatform.googleapis.com'
# The AI Platform services require regional API endpoints.
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
# This client only needs to be created once, and can be reused for multiple requests.
client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
# The format of each instance should conform to the deployed model's prediction input schema.
instance_dict = add_traffic_last_5min(instance)
instance = json_format.ParseDict(instance, Value())
instances = [instance]
endpoint = client.endpoint_path(
project=PROJECT, location=REGION, endpoint=ENDPOINT_ID
)
response = client.predict(
endpoint=endpoint, instances=instances
)
# The predictions are a google.protobuf.Value representation of the model's predictions.
print(" prediction:", response.predictions[0][0])
Explanation: Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well.
Copy the ENDPOINT_ID from the deployment in the previous lab to the beginning of the block below.
End of explanation |
11,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 2 - Core Query Builder Functions
Step1: Query builders
match_field
Forge has many helper functions to make constructing queries easier. The simplest of the helpers is match_field().
To use match_field(), provide it with the field and value to match on.
Step2: You can use match_field() as many times as you like to add more fields and values. (This applies to all of the query builder helpers.)
Step3: Once you're done adding fields, use the search() method to execute your search. You don't need to specify the advanced argument; when using the query builder functions it is always set to True.
After you execute a search, the query is cleared from memory.
Step4: exclude_field
exclude_field() is the opposite of match_field(); it excludes results with the specified value.
Step5: You can chain calls together if you want. | Python Code:
from mdf_forge.forge import Forge
mdf = Forge()
Explanation: Part 2 - Core Query Builder Functions
End of explanation
mdf.match_field("material.elements", "Al")
Explanation: Query builders
match_field
Forge has many helper functions to make constructing queries easier. The simplest of the helpers is match_field().
To use match_field(), provide it with the field and value to match on.
End of explanation
mdf.match_field("mdf.source_name", "oqmd*")
Explanation: You can use match_field() as many times as you like to add more fields and values. (This applies to all of the query builder helpers.)
End of explanation
res = mdf.search(limit=10)
res[0]
Explanation: Once you're done adding fields, use the search() method to execute your search. You don't need to specify the advanced argument; when using the query builder functions it is always set to True.
After you execute a search, the query is cleared from memory.
End of explanation
mdf.exclude_field("material.elements", "Cu")
Explanation: exclude_field
exclude_field() is the opposite of match_field(); it excludes results with the specified value.
End of explanation
mdf.exclude_field("mdf.source_name", "sluschi").match_field("material.elements", "Al").exclude_field("mdf.source_name", "oqmd")
res = mdf.search(limit=10)
res[0]
Explanation: You can chain calls together if you want.
End of explanation |
11,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluation of classified contacts between rods and bipolar cells
This notebook contains the code to reproduce all plots in figure 6 showing statistics about the rod-BC contacts
Step1: Number of contacted rods per bipolar cell, averaged over BC type (Figure 6D)
Step2: Number of contacted rod bipolar cells per rod (Figure 6E)
Step3: Numer of contacted OFF cone bipolar cells per rod (Figure 6F) | Python Code:
import numpy as np
import scipy.linalg
from scipy.stats import itemfreq
import matplotlib
import matplotlib.pyplot as plt
from scipy.io import loadmat
import pandas as pd
import seaborn as sns
from sklearn import cross_validation
from sklearn import svm
%matplotlib inline
matplotlib.rc('font',**{'family':'sans-serif','sans-serif':['Arial']})
matplotlib.rcParams.update({'mathtext.default': 'regular'})
matplotlib.rcParams.update({'font.size': 14})
sns.set_style("whitegrid")
BC_ids=np.loadtxt('data/BC_IDs_new')
BC_in_rod_area=np.loadtxt('data/BC_in_rod_area')
BC_excluded=np.array([691,709,827,836])
rod_excluded=np.array([3309])
contact_summary=pd.read_pickle('data/rod_contact_predictions')
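# Keep only contacts classified as real (prediction == 1) and restrict them to bipolar cells inside the rod area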
true_contacts=contact_summary.ix[(contact_summary['prediction']==1)]
true_contacts=true_contacts[np.in1d(true_contacts['cell'],BC_in_rod_area)].reset_index().drop('index',axis=1)
stat_bc_contacts=pd.DataFrame(BC_ids[(((BC_ids[:,4]>=58)&(BC_ids[:,4]<=62))|(BC_ids[:,4]==71))\
&np.in1d(BC_ids[:,0],BC_excluded,invert=True)][:,[0,4]],columns=['cell','type'])
contact_freq_type=itemfreq(true_contacts['cell'].as_matrix())
for i in range(stat_bc_contacts.shape[0]):
stat_bc_contacts.loc[i,'count']=0
try:
stat_bc_contacts.ix[i,'count']=contact_freq_type[contact_freq_type[:,0]==stat_bc_contacts.ix[i,'cell'],1]
except ValueError:
continue
stat_bc_contacts=stat_bc_contacts[np.in1d(stat_bc_contacts['cell'],BC_in_rod_area)]
rod_ids=np.unique(contact_summary['rod'].as_matrix())
stat_rod_contacts=pd.DataFrame(np.concatenate((np.tile(rod_ids,14).reshape(-1,1),np.repeat(np.arange(58,72),rod_ids.shape[0]).reshape(-1,1)),axis=1),columns=['rod','type'])
for i in range(stat_rod_contacts.shape[0]):
stat_rod_contacts.loc[i,'count']=np.sum((true_contacts['rod']==stat_rod_contacts.ix[i,'rod'])&\
(true_contacts['type']==stat_rod_contacts.ix[i,'type']))
stat_rod_contacts=stat_rod_contacts[np.in1d(stat_rod_contacts['type'],np.array([58,59,60,61,62,71]))]
rod_ids_off=np.unique(true_contacts['rod'])
stat_rod_contacts_off=pd.DataFrame(rod_ids_off,columns=['rod'])
for i in range(stat_rod_contacts_off.shape[0]):
stat_rod_contacts_off.loc[i,'count']=np.sum((true_contacts['rod']==stat_rod_contacts.ix[i,'rod'])&\
(true_contacts['type']<71))
Explanation: Evaluation of classified contacts between rods and bipolar cells
This notebook contains the code to reproduce all plots in figure 6 showing statistics about the rod-BC contacts
End of explanation
labels = ['1','2','3A','3B','4','RBC']
plt.figure(figsize=(3/2.54,3/2.54))
sns.set(font='Arial',style='white',context='paper',rc={"xtick.major.size": 0, "ytick.major.size": 4})
with matplotlib.rc_context({"lines.linewidth": 0.7}):
ax=sns.barplot(x='type',y='count',data=stat_bc_contacts,order=[58,59,60,61,62,71],ci=95,color='grey')
ax.set_xticklabels(labels)
ax.set(ylabel='# rods',ylim=(0,40),xlabel='BC types',yticks=[0,10,20,30,40])
ax.spines['left'].set_position(('outward',3))
sns.despine()
# plt.savefig('figures/rod_contacts_per_bc.svg',bbox_inches='tight',dpi=300)
plt.show()
Explanation: Number of contacted rods per bipolar cell, averaged over BC type (Figure 6D)
End of explanation
plt.figure(figsize=(3/2.54,3/2.54))
sns.set(font='Arial',style='white',context='paper',rc={"xtick.major.size": 0, "ytick.major.size": 4})
ax=sns.countplot(x='count',data=stat_rod_contacts[stat_rod_contacts['type']==71],order=np.arange(0,4),color='grey')
ncount=len(stat_rod_contacts[stat_rod_contacts['type']==71]['rod'])
ax.set(xlabel='Contacted RBCs',ylabel='# rods',yticks=[0,200,400,600,800])
ax2=ax.twinx()
ax2.set(ylim=([0,ax.get_ylim()[1]/ncount*100]),yticks=[0,10,20,30,40,50],yticklabels=['0','10','20','30','40','50'],ylabel='Fraction [%]')
ax.spines['left'].set_position(('outward',3))
ax.spines['right'].set_position(('outward',3))
ax2.spines['left'].set_position(('outward',3))
ax2.spines['right'].set_position(('outward',3))
sns.despine(right=False)
# plt.savefig('figures/rbc_contacts_per_rod.svg',bbox_inches='tight',dpi=300)
plt.show()
Explanation: Number of contacted rod bipolar cells per rod (Figure 6E)
End of explanation
plt.figure(figsize=(3/2.54,3/2.54))
sns.set(font='Arial',style='white',context='paper',rc={"xtick.major.size": 0, "ytick.major.size": 4})
ax=sns.countplot(x='count',data=stat_rod_contacts_off,order=np.arange(0,4),color='grey')
ncount=len(stat_rod_contacts_off['rod'])
ax.set(xlabel='Contacted CBCs',ylabel='# rods',yticks=[0,500,1000,1500])
ax2=ax.twinx()
ax2.set(ylim=([0,ax.get_ylim()[1]/ncount*100]),yticks=[0,20,40,60,80,100],yticklabels=['0','20','40','60','80','100'],ylabel='Fraction [%]')
ax.spines['left'].set_position(('outward',3))
ax.spines['right'].set_position(('outward',3))
ax2.spines['left'].set_position(('outward',3))
ax2.spines['right'].set_position(('outward',3))
sns.despine(right=False)
# plt.savefig('figures/cbc_contacts_per_rod.svg',bbox_inches='tight',dpi=300)
plt.show()
Explanation: Number of contacted OFF cone bipolar cells per rod (Figure 6F)
End of explanation |
11,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FILES
In Python, we use the open function to open a file; it receives the name of the file to open. By default, if we do not indicate anything, the file is opened in read mode.
OPEN
Step1: The open function will open the file with the given name, in this case the file cuna.txt. If it does not succeed, an exception is raised. If the file could be opened correctly, the variable fichero will let us manipulate it.
Step2: It is also possible to obtain all the lines of the file with a single call to the readlines function.
Step3: In this case, the variable lineas will hold a list of strings with all the lines of the file.
It is important to keep in mind that when functions such as archivo.readlines() are used, the whole file is loaded into memory. Any instruction that loads a complete file should be used only with small files, since otherwise it could exhaust the available memory.
Step4: <br/>
OPEN | Python Code:
%pwd
fichero = open("../datos/cuna.txt")
Explanation: FILES
In Python, we use the open function to open a file; it receives the name of the file to open. By default, if we do not indicate anything, the file is opened in read mode.
OPEN: READ MODE
End of explanation
ls "../datos"
fichero= open("../datos/cuna.txt")
for linea in fichero:
print(linea)
Explanation: The open function will open the file with the given name, in this case the file cuna.txt. If it does not succeed, an exception is raised. If the file could be opened correctly, the variable fichero will let us manipulate it.
End of explanation
fichero = open("../datos/cuna.txt")
lineas = fichero.readlines()
lineas
Explanation: It is also possible to obtain all the lines of the file with a single call to the readlines function.
End of explanation
# Use rstrip to remove the trailing newline characters
lineas[0].rstrip()
Explanation: In this case, the variable lineas will hold a list of strings with all the lines of the file.
It is important to keep in mind that when functions such as archivo.readlines() are used, the whole file is loaded into memory. Any instruction that loads a complete file should be used only with small files, since otherwise it could exhaust the available memory.
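For larger files, a safer pattern is to iterate over the file object lazily and let a with block close it automatically; a small illustrative sketch using the same cuna.txt file:
with open("../datos/cuna.txt") as f:
    for numero, linea in enumerate(f, start=1):
        print(numero, linea.rstrip())  # only one line is held in memory at a time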
End of explanation
# Open the new file; the 'w' parameter indicates that we are in write mode
arc_write = open('../datos/nuevo.txt', 'w')
# We use the data from the file we opened before, 'cuna.txt':
# fichero = open("../datos/cuna.txt")
# lineas = fichero.readlines()
# Copy into the new file only the even-numbered lines of the variable 'lineas', which in turn contains all the lines
# of the file 'cuna.txt'
for i, line in enumerate(lineas):
if i % 2 == 0:
arc_write.write(str(i) + ' ' + line)
else:
pass
arc_write.close() # close the file
open('../datos/nuevo.txt').readlines()
open('../datos/nuevo.txt', 'a').write('\nEste es el final')
open('../datos/nuevo.txt').readlines()
open('../datos/nuevo.txt').readline()
Explanation: <br/>
OPEN: WRITE MODE
If we want to open a file in write mode, we have to indicate it as the second parameter of the open function.
End of explanation |
11,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part3
Using the models.ldamodel module from the gensim library, run topic modeling over the corpus. Explore different numbers of topics (varying from 5 to 50), and settle for the parameter which returns topics that you consider to be meaningful at first sight.
Step1: To improve the result of the lda model, we group the mails by Subject. We filter the subjects to remove the keywords 're', 'fw', 'fvv', 'fwd'
Step2: we concatenate the e-mail body with the subject.
Step3: to get more meaningful topics, we filter out English stop words and some custom words that don't have much meaning as topics.
Step4: To use the LDA model, we need to transform every document into a list of tuples (ID, term frequency).
Step5: Now we generate the LDA models with different numbers of topics.
Step6: when we choose the number of topics to be 25, the topics seem to be the most meaningfull. | Python Code:
# imports
import pandas as pd
import numpy as np
from nltk.corpus import stopwords
from gensim import corpora, models, utils
from nltk.stem import WordNetLemmatizer
data = pd.read_csv('hillary-clinton-emails/Emails.csv', index_col=0).dropna()
texts = pd.concat((data.ExtractedBodyText ,data.ExtractedSubject), axis=1)
sw = ['re', 'fw', 'fvv', 'fwd']
Explanation: Part3
Using the models.ldamodel module from the gensim library, run topic modeling over the corpus. Explore different numbers of topics (varying from 5 to 50), and settle for the parameter which returns topics that you consider to be meaningful at first sight.
End of explanation
def filt(row):
t = utils.simple_preprocess(row.ExtractedSubject)
filt = list(filter(lambda x: x not in sw, t))
return ' '.join(filt)
texts['ExtractedSubject'] = texts.apply(filt, axis=1)
texts = texts.groupby(by='ExtractedSubject', as_index=False).apply(lambda x: (x + ' ').sum())
texts.head()
Explanation: To improve the result of the lda model, we group the mails by Subject. We filter the subjects to remove the keywords 're', 'fw', 'fvv', 'fwd'
End of explanation
texts.ExtractedBodyText.fillna('',inplace=True)
texts.ExtractedSubject.fillna('',inplace=True)
texts['SubjectBody'] = texts.ExtractedBodyText + ' ' + texts.ExtractedSubject
mails = texts.SubjectBody
Explanation: we concatenate the e-mail body with the subject.
End of explanation
documents = []
custom = ['like', 'think', 'know', 'want', 'sure', 'thing', 'send', 'sent', 'speech', 'print', 'time','want', 'said', 'maybe', 'today', 'tomorrow', 'thank', 'thanks']
english_stop_words = ["a", "about", "above", "above", "across", "after", "afterwards", "again", "against", "all", "almost", "alone", "along", "already", "also","although","always","am","among", "amongst", "amoungst", "amount", "an", "and", "another", "any","anyhow","anyone","anything","anyway", "anywhere", "are", "around", "as", "at", "back","be","became", "because","become","becomes", "becoming", "been", "before", "beforehand", "behind", "being", "below", "beside", "besides", "between", "beyond", "bill", "both", "bottom","but", "by", "call", "can", "cannot", "cant", "co", "con", "could", "couldnt", "cry", "de", "describe", "detail", "do", "done", "down", "due", "during", "each", "eg", "eight", "either", "eleven","else", "elsewhere", "empty", "enough", "etc", "even", "ever", "every", "everyone", "everything", "everywhere", "except", "few", "fifteen", "fify", "fill", "find", "fire", "first", "five", "for", "former", "formerly", "forty", "found", "four", "from", "front", "full", "further", "get", "give", "go", "had", "has", "hasnt", "have", "he", "hence", "her", "here", "hereafter", "hereby", "herein", "hereupon", "hers", "herself", "him", "himself", "his", "how", "however", "hundred", "ie", "if", "in", "inc", "indeed", "interest", "into", "is", "it", "its", "itself", "keep", "last", "latter", "latterly", "least", "less", "ltd", "made", "many", "may", "me", "meanwhile", "might", "mill", "mine", "more", "moreover", "most", "mostly", "move", "much", "must", "my", "myself", "name", "namely", "neither", "never", "nevertheless", "next", "nine", "no", "nobody", "none", "noone", "nor", "not", "nothing", "now", "nowhere", "of", "off", "often", "on", "once", "one", "only", "onto", "or", "other", "others", "otherwise", "our", "ours", "ourselves", "out", "over", "own","part", "per", "perhaps", "please", "put", "rather", "re", "same", "see", "seem", "seemed", "seeming", "seems", "serious", "several", "she", "should", "show", "side", "since", "sincere", "six", "sixty", "so", "some", "somehow", "someone", "something", "sometime", "sometimes", "somewhere", "still", "such", "system", "take", "ten", "than", "that", "the", "their", "them", "themselves", "then", "thence", "there", "thereafter", "thereby", "therefore", "therein", "thereupon", "these", "they", "thickv", "thin", "third", "this", "those", "though", "three", "through", "throughout", "thru", "thus", "to", "together", "too", "top", "toward", "towards", "twelve", "twenty", "two", "un", "under", "until", "up", "upon", "us", "very", "via", "was", "we", "well", "were", "what", "whatever", "when", "whence", "whenever", "where", "whereafter", "whereas", "whereby", "wherein", "whereupon", "wherever", "whether", "which", "while", "whither", "who", "whoever", "whole", "whom", "whose", "why", "will", "with", "within", "without", "would", "yet", "you", "your", "yours", "yourself", "yourselves", "the"]
sw =stopwords.words('english') + sw + custom + english_stop_words
for text in mails:
t = utils.simple_preprocess(text)
filt = list(filter(lambda x: (x not in sw) and len(x) > 3, t))
lemmatizer = WordNetLemmatizer()
lemmatized = [lemmatizer.lemmatize(x) for x in filt]
filt2 = list(filter(lambda x: (x not in sw) and len(x) > 3, lemmatized))
documents.append(filt2)
Explanation: to get more meaningful topics, we filter out English stop words and some custom words that don't have much meaning as topics.
End of explanation
dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]
Explanation: To use the LDA model, we need to transform every document into a list of tuples (ID, term frequency).
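To see what these tuples look like, you can peek at the first document; dictionary[token_id] maps an ID back to its token (a small inspection sketch):
print(corpus[0][:10])
print([(dictionary[token_id], count) for token_id, count in corpus[0][:10]])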
End of explanation
import pprint
pp = pprint.PrettyPrinter(depth=2)
for i in range(5, 50, 10):
print('------------------------------------')
print(i, 'topics')
lda = models.LdaModel(corpus, num_topics=i, id2word = dictionary)
pp.pprint(lda.print_topics(lda.num_topics))
print()
Explanation: Now we generate the LDA models with different numbers of topics.
End of explanation
lda = models.LdaModel(corpus, num_topics=25, id2word = dictionary)
pp.pprint(lda.print_topics(lda.num_topics))
Explanation: when we choose the number of topics to be 25, the topics seem to be the most meaningful.
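Beyond eyeballing the topics, one option for comparing topic counts more systematically is a coherence score; this sketch uses gensim's CoherenceModel on the preprocessed documents:
from gensim.models import CoherenceModel
cm = CoherenceModel(model=lda, texts=documents, dictionary=dictionary, coherence='c_v')
print("c_v coherence:", cm.get_coherence())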
End of explanation |
11,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MM
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
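For illustration only (hypothetical choice), a single-valued ENUM takes exactly one of the strings listed above, e.g.
DOC.set_value("Explicit diffusion")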
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
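For illustration only (hypothetical choices): for a multi-valued ENUM (cardinality 0.N) a model representing several runoff types would presumably repeat the call once per selected choice, e.g.
DOC.set_value("Gravity drainage")
DOC.set_value("Baseflow from groundwater")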
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
11,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of plots and calculations using the tmm and colorpy package
This example uses the tmm and colorpy packages to calculate the reflectance and surface color of stacked layers.
Note that tmm and colorpy packages are slightly altered from their original version. To successfully run this script, you have to download them from the following github repo
Step1: Set up
Step2: Color calculations
Color calculations
Step3: Calculate absorption vs. depth
Step4: Calculate total absorption in each layer
Step5: Make a two-dimensional plot
Step6: Create chroma map
Step7: Create hue map | Python Code:
from __future__ import division, print_function, absolute_import
%load_ext autoreload
%autoreload 2
from pypvcell.tmm_core import (coh_tmm, unpolarized_RT, ellips, absorp_in_each_layer,
position_resolved, find_in_structure_with_inf)
from numpy import pi, linspace, inf, array
import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
from pypvcell.transfer_matrix_optics import get_ntotal_fn
%matplotlib inline
Explanation: Examples of plots and calculations using the tmm and colorpy package
This example uses the tmm and colorpy packages to calculate the reflectance and surface color of stacked layers.
Note that tmm and colorpy packages are slightly altered from their original version. To successfully run this script, you have to download them from the following github repo:
colorpy: https://github.com/kanhua/ColorPy
Imports
End of explanation
try:
import colorpy.illuminants
import colorpy.colormodels
import colorpy.plots
from pypvcell import color
colors_were_imported = True
except ImportError:
    # without colorpy, the color calculations below cannot run, but everything else is fine.
colors_were_imported = False
# "5 * degree" is 5 degrees expressed in radians
# "1.2 / degree" is 1.2 radians expressed in degrees
degree = pi/180
import colorpy.illuminants
import colorpy.colormodels
import colorpy.plots
if not colors_were_imported:
print('Colorpy was not detected (or perhaps an error occurred when',
'loading it). You cannot do color calculations, sorry!',
'http://pypi.python.org/pypi/colorpy')
else:
print("Colorpy is successfully installed.")
from pypvcell import color
def hsvc_from_rgb(rgb):
r=rgb[0]
g=rgb[1]
b=rgb[2]
arg_M=np.argmax(rgb)
arg_m=np.argmin(rgb)
M=rgb[arg_M]
m=rgb[arg_m]
C=M-m
if C==0:
H=np.nan
elif arg_M==0:
H=np.mod(float(g-b)/float(C),6.0)
elif arg_M==1:
H=float(b-r)/float(C)+2
elif arg_M==2:
H=float(r-g)/float(C)+4
H*=60
V=M
if V==0:
S=0
else:
S=C/V
return np.array([H,S,V,C])
def chroma_from_irgb(irgb):
rgb=colorpy.colormodels.rgb_from_irgb(irgb)
arg_M=np.argmax(rgb)
arg_m=np.argmin(rgb)
M=rgb[arg_M]
m=rgb[arg_m]
C=M-m
return C
def hue_from_irgb(irgb):
return hsvc_from_rgb(irgb)[0]
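# Quick illustrative check of the helpers above (values are hypothetical, not from the notebook):
# pure red has hue 0 deg, saturation 1, value 1 and chroma 1, e.g.
#   hsvc_from_rgb([1.0, 0.0, 0.0])  -> array([0., 1., 1., 1.])
#   hue_from_irgb([255, 0, 0])      -> 0.0   (hue is scale-invariant, so integer RGB also works)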
Explanation: Set up
End of explanation
%%time
# Crystalline silicon refractive index. Data from Palik via
# http://refractiveindex.info, I haven't checked it, but this is just for
# demonstration purposes anyway.
Si_n_fn = get_ntotal_fn('Si')
# SiO2 refractive index (approximate): 1.46 regardless of wavelength
SiO2_n_fn = get_ntotal_fn('SiO2_2')
TiO2_n_fn=get_ntotal_fn("TiO2_2")
aSi_n_fn=get_ntotal_fn("Si")
# air refractive index
air_n_fn = get_ntotal_fn("Air")
n_fn_list = [air_n_fn, SiO2_n_fn, TiO2_n_fn, Si_n_fn]
d_list = [inf, 200, 50, inf]
th_0 = 0
# Print the colors, and show plots, for the special case of 300nm-thick SiO2
reflectances = color.calc_reflectances(n_fn_list, d_list, th_0)
illuminant = colorpy.illuminants.get_illuminant_D65()
spectrum = color.calc_spectrum(reflectances, illuminant)
color_dict = color.calc_color(spectrum)
print('air / 300nm SiO2 / Si --- rgb =', color_dict['rgb'], ', xyY =', color_dict['xyY'])
plt.figure()
color.plot_reflectances(reflectances,
title='air / 300nm SiO2 / Si -- '
'Fraction reflected at each wavelength')
plt.figure()
color.plot_spectrum(spectrum,
title='air / 300nm SiO2 / Si -- '
'Reflected spectrum under D65 illumination')
Si_n_fn(500)
plt.plot(reflectances[:,0],reflectances[:,1])
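# (Illustrative addition, not part of the original cell) label the quick-look plot above;
# reflectances[:,0] is assumed to hold wavelength in nm and reflectances[:,1] the reflected fraction.
plt.xlabel("wavelength (nm)")
plt.ylabel("reflectance")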
Explanation: Color calculations
Color calculations: What color is an air / thin SiO2 / Si wafer?
End of explanation
lam_vac_arr=np.linspace(500,800,5)
for lam_vac in lam_vac_arr:
n_list = [np.asscalar(n_fn_list[i](lam_vac)) for i in range(len(n_fn_list))]
coh_data=coh_tmm('s',n_list,d_list,th_0,lam_vac)
zx=np.linspace(-50,1000,100)
z_response=np.ones_like(zx)
for i in range(zx.shape[0]):
d_layer,d=find_in_structure_with_inf(d_list,zx[i])
za=position_resolved(d_layer,d,coh_data)['absor']
z_response[i]=za
plt.plot(zx,z_response,label="wl=%s"%lam_vac)
plt.legend()
plt.xlabel("depth from sample surface (nm)")
plt.ylabel("absorption")
plt.show()
Explanation: Calculate absorption vs. depth
End of explanation
lam_vac_arr=np.linspace(500,1000,30)
abs_si=[]
index_of_si_layer=3
for lam_vac in lam_vac_arr:
n_list = [np.asscalar(n_fn_list[i](lam_vac)) for i in range(len(n_fn_list))]
coh_data=coh_tmm('s',n_list,d_list,th_0,lam_vac)
result=absorp_in_each_layer(coh_data)
abs_si.append(result[index_of_si_layer])
plt.plot(lam_vac_arr,abs_si)
plt.xlabel("wavelength (nm)")
plt.ylabel("total absorption in Si layer")
plt.title("absorption in silicon layer")
plt.show()
%%time
# Calculate irgb color (i.e. gamma-corrected sRGB display color rounded to
# integers 0-255) versus thickness of SiO2
max_SiO2_thickness = 300
SiO2_thickness_list = linspace(0,max_SiO2_thickness,num=80)
irgb_list = []
n_fn_list_2 = [air_n_fn, SiO2_n_fn, Si_n_fn]
for SiO2_d in SiO2_thickness_list:
d_list = [inf, SiO2_d ,inf]
reflectances_p = color.calc_reflectances(n_fn_list_2, d_list, th_0,pol='p')
reflectances_s = color.calc_reflectances(n_fn_list_2, d_list, th_0,pol='s')
reflectances_unp=(reflectances_p+reflectances_s)/2
reflectances_unp[:,0]=reflectances_p[:,0]
illuminant = colorpy.illuminants.get_illuminant_D65()
spectrum = color.calc_spectrum(reflectances_unp, illuminant)
color_dict = color.calc_color(spectrum)
irgb_list.append(colorpy.colormodels.irgb_string_from_irgb(color_dict['irgb']))
# Plot those colors
print('Making color vs SiO2 thickness graph. Compare to (for example)')
print('http://www.htelabs.com/appnotes/sio2_color_chart_thermal_silicon_dioxide.htm')
colorpy.plots.color_tile_vs_1param_plot(SiO2_thickness_list,irgb_list,
xlabel_name="SiO2 thickness (nm)",
title_name="Air/SiO2/Si (nm)")
Explanation: Calculate total absorption in each layer
End of explanation
%%time
# Calculate irgb color (i.e. gamma-corrected sRGB display color rounded to
# integers 0-255) versus thickness of SiO2
max_inc_angle = 80
max_d=150
inc_angle = linspace(0,max_inc_angle,num=10)
SiO2_d_list=linspace(0,max_d,num=10)
irgb_list =[]
for i,ang in enumerate(inc_angle):
for j,SiO2_d in enumerate(SiO2_d_list):
d_list = [inf, SiO2_d, 50, inf]
reflectances = color.calc_reflectances(n_fn_list, d_list, ang/180*pi)
illuminant = colorpy.illuminants.get_illuminant_D65()
spectrum = color.calc_spectrum(reflectances, illuminant)
color_dict = color.calc_color(spectrum)
color_string = colorpy.colormodels.irgb_string_from_irgb(color_dict['irgb'])
irgb_list.append(color_string)
colorpy.plots.color_tile_vs_2param_plot(inc_angle,SiO2_d_list,irgb_list,
xlabel_name="incident angle (pol-s)",
ylabel_name="SiO2 thickness (nm)",
title_name="Air/SiO2/Si color")
plt.tight_layout()
plt.savefig("color_tiles_demo.pdf")
plt.show()
%%time
# Calculate irgb color (i.e. gamma-corrected sRGB display color rounded to
# integers 0-255) versus thickness of SiO2
max_inc_angle = 80
max_d=150
inc_angle = linspace(0,max_inc_angle,num=10)
SiO2_d_list=linspace(0,max_d,num=15)
TiO2_d_list=linspace(0,max_d,num=15)
irgb_list =[]
for i,TiO2_d in enumerate(TiO2_d_list):
for j,SiO2_d in enumerate(SiO2_d_list):
d_list = [inf, SiO2_d, TiO2_d,inf]
reflectances = color.calc_reflectances(n_fn_list, d_list, 0/180*pi)
illuminant = colorpy.illuminants.get_illuminant_D65()
spectrum = color.calc_spectrum(reflectances, illuminant)
color_dict = color.calc_color(spectrum)
color_string = colorpy.colormodels.irgb_string_from_irgb(color_dict['irgb'])
irgb_list.append(color_string)
colorpy.plots.color_tile_vs_2param_plot(TiO2_d_list,SiO2_d_list,irgb_list,
xlabel_name="TiO2 thickness (nm)",
ylabel_name="SiO2 thickness (nm)",
title_name="Air/SiO2/TiO2/Si color")
plt.savefig("color_tiles_demo.png")
Explanation: Make a two-dimensional plot
End of explanation
# convert the color string to hsv values
rgb_list=list(map(colorpy.colormodels.irgb_from_irgb_string,irgb_list))
chroma_list=list(map(chroma_from_irgb,rgb_list))
chroma_list=np.reshape(chroma_list,(TiO2_d_list.shape[0],SiO2_d_list.shape[0]))
plt.pcolormesh(TiO2_d_list,SiO2_d_list,chroma_list.T)
plt.colorbar()
plt.title("Air/SiO2/TiO2/Si Chroma")
plt.xlabel("TiO2 thicknesses (nm)")
plt.ylabel("SiO2 thicknesses (nm)")
plt.tight_layout()
plt.savefig("chroma.pdf")
plt.show()
Explanation: Create chroma map
End of explanation
# convert the color string to hsv values
rgb_list=list(map(colorpy.colormodels.irgb_from_irgb_string,irgb_list))
hue_list=list(map(hue_from_irgb,rgb_list))
hue_list=np.reshape(hue_list,(TiO2_d_list.shape[0],SiO2_d_list.shape[0]))
plt.pcolormesh(TiO2_d_list,SiO2_d_list,hue_list.T,cmap='hsv',vmin=0,vmax=360)
plt.colorbar()
plt.title("Air/SiO2/TiO2/Si Hue")
plt.xlabel("TiO2 thicknesses (nm)")
plt.ylabel("SiO2 thicknesses (nm)")
plt.tight_layout()
plt.savefig("hue.pdf")
plt.show()
Explanation: Create hue map
End of explanation |
11,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Party game
Step1: 'numbers' is a list of lists. Using a list comprehension, flatten 'numbers' so it is a list of only numbers (not list of lists).
use the newly flattened 'numbers' and filter it in a way that it only contains odd numbers.
using a list comprehension, remove all words containing an 'i' from 'words'
using a list comprehension, remove all words containing more than two vowels from 'words'.
find all prime numbers between 1 and 100 using a single list comprehension
3. Cartesian/Polar Coordinates
Points may be given in polar $(r, \theta)$ or cartesian coordinates $(x, y)$, see Figure 1.
<img src="https
Step2: 5.2 Compute $\pi$
Step3: 5.3 Twist and turn
Convert this image | Python Code:
numbers = [[1,2,3],[4,5,6],[7,8,9]]
words = ['if','i','could','just','go','outside','and','have','an','ice','cream']
Explanation: 1. Party game: squeezed
One guessing game, called “squeezed”, is very common in parties. It consists of a player,
the chooser, who writes down a number between 00–99. The other players then take
turns guessing numbers, with a catch: if one says the chosen number, he loses and has
to do something daft. If the guessed number is not the chosen one, it splits the range.
The chooser then states the part which contains the chosen number. If the new region
only has one number, the chooser is said to be “squeezed” and is punished. An example
of gameplay would be:
Chooser writes down (secretly) his number (let’s say, 30).
Chooser: “State a number between 00 and 99.”
Player: “42”.
Chooser: “State a number between 00 and 42.”
Player: “26”.
Chooser: “State a number between 26 and 42.”
$\vdots$
Chooser: “State a number between 29 and 32.”
Player: “31”.
Chooser dances some very silly children song.
Implement this game in Python, where the computer is the chooser.
Useful: $\mathtt{random.randint()}$ and $\mathtt{input()}$.
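One possible sketch of the game loop (the function name squeezed and the simplified boundary handling are just for illustration, and there is no input validation):
```python
import random

def squeezed():
    lo, hi = 0, 99
    secret = random.randint(lo + 1, hi - 1)
    while hi - lo > 2:
        guess = int(input("State a number between {:02d} and {:02d}: ".format(lo, hi)))
        if guess == secret:
            print("You said the chosen number -- you lose and must do something daft!")
            return
        lo, hi = (guess, hi) if guess < secret else (lo, guess)
    print("Squeezed! The number was {:02d}; the chooser is punished.".format(secret))
```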
2. List comprehensions
Given the following lists:
End of explanation
import scipy.ndimage
im = scipy.ndimage.imread("images/ill.png")
plt.imshow(im)
plt.grid(0)
plt.axis('Off')
Explanation: 'numbers' is a list of lists. Using a list comprehension, flatten 'numbers' so it is a list of only numbers (not list of lists).
use the newly flattened 'numbers' and filter it in a way that it only contains odd numbers.
using a list comprehension, remove all words containing an 'i' from 'words'
using a list comprehension, remove all words containing more than two vowels from 'words'.
find all prime numbers between 1 and 100 using a single list comprehension
3. Cartesian/Polar Coordinates
Points may be given in polar $(r, \theta)$ or cartesian coordinates $(x, y)$, see Figure 1.
<img src="https://upload.wikimedia.org/wikipedia/commons/1/18/Polar_coordinates_.png" />
Figure 1. Relationship between polar and cartesian coordinates.
3.1 Polar to cartesian
Write a function $\mathtt{pol2cart}$ that takes a tuple $\mathtt{(r, \theta)}$ in polar coordinates and
returns a tuple in cartesian coordinates.
3.2 Cartesian to polar
Write the inverse function $\mathtt{cart2pol}$, such that $\mathtt{pol2cart( cart2pol( ( x,y) ) )}$ is $\mathtt{(x, y)}$ for any input $\mathtt{(x, y)}$.
3.3 Extend the two functions:
such that they can in addition handle lists of tuples.
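A possible sketch of the scalar-tuple version of this conversion pair (extending it to lists of tuples is left as the exercise states):
```python
import numpy as np

def pol2cart(p):
    r, theta = p
    return (r * np.cos(theta), r * np.sin(theta))

def cart2pol(c):
    x, y = c
    return (np.hypot(x, y), np.arctan2(y, x))
```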
4. A bit of statistics
Draw $N=10000$ uniformly distributed random numbers (use np.random.uniform, for example). Plot its histogram and check that it looks uniform.
Now draw another such sample, and sum the two. What does the histogram of the sum look like?
Continue to sum $3,4,5,..$ such samples and keep plotting the histogram. It should quickly start to look like a gaussian.
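A compact sketch of this experiment (assuming numpy as np and matplotlib.pyplot as plt are already imported, as elsewhere in this notebook; the plotting style is up to you):
```python
N = 10000
total = np.zeros(N)
for k in range(1, 6):                 # sum 1, 2, ..., 5 uniform samples
    total += np.random.uniform(size=N)
    plt.hist(total, bins=50, alpha=0.5, label="sum of {}".format(k))
plt.legend()
plt.show()
```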
5. Some numpy foo
5.1 Defeat optical illusions
This is a quite famous optical illusion:
<img src="images/ill.png"/>
The rows are perfectly straight, although they appear crooked.
Use numpy and slicing operations to verify for yourself that they are indeed so.
The code for loading the image as a numpy array is provided below:
End of explanation
x = np.arange(-1,1, 0.01)
y = np.arange(-1,1, 0.01)
X,Y = np.meshgrid(x,y)
Z = X**2 + Y**2
Z = np.where(Z<1, 1, 0)
plt.matshow(Z)
Explanation: 5.2 Compute $\pi$:
Below is an array $Z$ which, when plotted, produces an image of a circle.
Compute the value of $\pi$ by counting the number of black pixels in the
array.
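One way to do it, given that the unit circle covers a fraction $\pi/4$ of the $[-1,1]\times[-1,1]$ square sampled by $Z$:
```python
pi_estimate = 4.0 * Z.sum() / Z.size
print(pi_estimate)
```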
End of explanation
mos = scipy.ndimage.imread("images/mosaic_grey.png")
plt.imshow(mos)
Explanation: 5.3 Twist and turn
Convert this image:
<img width = "400px" src = "images/mosaic_grey.png" />
to
<img width = "400px" src = "images/mosaic_conv.png" />
End of explanation |
11,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mesh examples
this notebook illustrates the basic ways of interacting with the pyro2 mesh module. We create some data that lives on a grid and show how to fill the ghost cells. The pretty_print() function shows us that they work as expected.
Step1: Setup a Grid with Variables
There are a few core classes that we deal with when creating a grid with associated variables
Step2: Then create a dataset that lives on this grid and add a variable name. For each variable that lives on the grid, we need to define the boundary conditions -- this is done through the BC object.
Step3: Working with the data
Now we fill the grid with random data. get_var() returns an ArrayIndexer object that has methods for accessing views into the data. Here we use a.v() to get the "valid" region, i.e. excluding ghost cells.
Step4: when we pretty_print() the variable, we see the ghost cells colored red. Note that we just filled the interior above.
Step5: pretty_print() can also take an argument specifying the format string to be used for the output.
Step6: now fill the ghost cells -- notice that the left and right are periodic, the upper is outflow, and the lower is reflect, as specified when we registered the data above.
Step7: We can find the L2 norm of the data easily
Step8: and the min and max
Step9: ArrayIndexer
When we access the data, an ArrayIndexer object is returned. The ArrayIndexer sub-classes the NumPy ndarray, so it can do all of the methods that a NumPy array can, but in addition, we can use the ip(), jp(), or ipjp() methods of the ArrayIndexer object to shift our view in the x, y, or x & y directions.
To make this clearer, we'll change our data set to be nicely ordered numbers. We index the ArrayIndex the same way we would a NumPy array. The index space includes ghost cells, so the ilo and ihi attributes from the grid object are useful to index just the valid region. The .v() method is a shortcut that also gives a view into just the valid data.
Note
Step10: We index our arrays as {i,j}, so x (indexed by i) is the row and y (indexed by j) is the column in the NumPy array. Note that python arrays are stored in row-major order, which means that all of the entries in the same row are adjacent in memory. This means that when we simply print out the ndarray, we see constant-x horizontally, which is the transpose of what we are used to.
Step11: We can offset our view into the array by one in x -- this would be like {i+1, j} when we loop over data. The ip() method is used here, and takes an argument which is the (positive) shift in the x (i) direction. So here's a shift by 1
Step12: A shifted view is necessarily smaller than the original array, and relies on ghost cells to bring new data into view. Because of this, the underlying data is no longer the same size as the original data, so we return it as an ndarray (which is actually just a view into the data in the ArrayIndexer object, so no copy is made).
To see that it is simply a view, let's shift and edit the data
Step13: Here, since d was really a view into $a_{i+1,j}$, and we accessed element (1,1) into that view (with 0,0 as the origin), we were really accessing the element (2,1) in the valid region
Differencing
ArrayIndexer objects are easy to use to construct differences, like those that appear in a stencil for a finite-difference, without having to explicitly loop over the elements of the array.
Here we'll create a new dataset that is initialized with a sine function
Step14: Our grid object can provide us with a scratch array (an ArrayIndexer object) defined on the same grid
Step15: We can then fill the data in this array with differenced data from our original array -- since b has a separate data region in memory, its elements are independent of a. We do need to make sure that we have the same number of elements on the left and right of the =. Since by default, ip() will return a view with the same size as the valid region, we can use .v() on the left to accept the differences.
Here we compute a centered-difference approximation to the first derivative
Step16: Coarsening and prolonging
we can get a new ArrayIndexer object on a coarser grid for one of our variables
Step17: or a finer grid | Python Code:
from __future__ import print_function
import numpy as np
import mesh.boundary as bnd
import mesh.patch as patch
import matplotlib.pyplot as plt
%matplotlib inline
# for unit testing, we want to ensure the same random numbers
np.random.seed(100)
Explanation: Mesh examples
this notebook illustrates the basic ways of interacting with the pyro2 mesh module. We create some data that lives on a grid and show how to fill the ghost cells. The pretty_print() function shows us that they work as expected.
End of explanation
g = patch.Grid2d(4, 6, ng=2)
print(g)
help(g)
Explanation: Setup a Grid with Variables
There are a few core classes that we deal with when creating a grid with associated variables:
Grid2d : this holds the size of the grid (in zones) and the physical coordinate information, including coordinates of cell edges and centers
BC : this is a container class that simply holds the type of boundary condition on each domain edge.
ArrayIndexer : this is an array of data along with methods that know how to access it with different offsets into the data that usually arise in stencils (like {i+1, j})
CellCenterData2d : this holds the data that lives on a grid. Each variable that is part of this class has its own boundary condition type.
We start by creating a Grid2d object with 4 x 6 cells and 2 ghost cells
End of explanation
bc = bnd.BC(xlb="periodic", xrb="periodic", ylb="reflect", yrb="outflow")
print(bc)
d = patch.CellCenterData2d(g)
d.register_var("a", bc)
d.create()
print(d)
Explanation: Then create a dataset that lives on this grid and add a variable name. For each variable that lives on the grid, we need to define the boundary conditions -- this is done through the BC object.
End of explanation
a = d.get_var("a")
a.v()[:,:] = np.random.rand(g.nx, g.ny)
Explanation: Working with the data
Now we fill the grid with random data. get_var() returns an ArrayIndexer object that has methods for accessing views into the data. Here we use a.v() to get the "valid" region, i.e. excluding ghost cells.
End of explanation
a.pretty_print()
Explanation: when we pretty_print() the variable, we see the ghost cells colored red. Note that we just filled the interior above.
End of explanation
a.pretty_print(fmt="%7.3g")
Explanation: pretty_print() can also take an argument specifying the format string to be used for the output.
End of explanation
d.fill_BC("a")
a.pretty_print()
Explanation: now fill the ghost cells -- notice that the left and right are periodic, the upper is outflow, and the lower is reflect, as specified when we registered the data above.
End of explanation
a.norm()
Explanation: We can find the L2 norm of the data easily
End of explanation
print(a.min(), a.max())
Explanation: and the min and max
End of explanation
type(a)
type(a.v())
a[:,:] = np.arange(g.qx*g.qy).reshape(g.qx, g.qy)
a.pretty_print()
Explanation: ArrayIndexer
When we access the data, an ArrayIndexer object is returned. The ArrayIndexer sub-classes the NumPy ndarray, so it can do all of the methods that a NumPy array can, but in addition, we can use the ip(), jp(), or ipjp() methods of the ArrayIndexer object to shift our view in the x, y, or x & y directions.
To make this clearer, we'll change our data set to be nicely ordered numbers. We index the ArrayIndex the same way we would a NumPy array. The index space includes ghost cells, so the ilo and ihi attributes from the grid object are useful to index just the valid region. The .v() method is a shortcut that also gives a view into just the valid data.
Note: when we use one of the ip(), jp(), ipjp(), or v() methods, the result is a regular NumPy ndarray, not an ArrayIndexer object. This is because it only spans part of the domain (e.g., no ghost cells), and therefore cannot be associated with the Grid2d object that the ArrayIndexer is built from.
End of explanation
a.v()
Explanation: We index our arrays as {i,j}, so x (indexed by i) is the row and y (indexed by j) is the column in the NumPy array. Note that python arrays are stored in row-major order, which means that all of the entries in the same row are adjacent in memory. This means that when we simply print out the ndarray, we see constant-x horizontally, which is the transpose of what we are used to.
End of explanation
a.ip(-1, buf=1)
Explanation: We can offset our view into the array by one in x -- this would be like {i+1, j} when we loop over data. The ip() method is used here, and takes an argument which is the (positive) shift in the x (i) direction. So here's a shift by 1
End of explanation
d = a.ip(1)
d[1,1] = 0.0
a.pretty_print()
Explanation: A shifted view is necessarily smaller than the original array, and relies on ghost cells to bring new data into view. Because of this, the underlying data is no longer the same size as the original data, so we return it as an ndarray (which is actually just a view into the data in the ArrayIndexer object, so no copy is made).
To see that it is simply a view, let's shift and edit the data
End of explanation
g = patch.Grid2d(8, 8, ng=2)
d = patch.CellCenterData2d(g)
bc = bnd.BC(xlb="periodic", xrb="periodic", ylb="periodic", yrb="periodic")
d.register_var("a", bc)
d.create()
a = d.get_var("a")
a[:,:] = np.sin(2.0*np.pi*a.g.x2d)
d.fill_BC("a")
Explanation: Here, since d was really a view into $a_{i+1,j}$, and we accessed element (1,1) into that view (with 0,0 as the origin), we were really accessing the element (2,1) in the valid region
Differencing
ArrayIndexer objects are easy to use to construct differences, like those that appear in a stencil for a finite-difference, without having to explicitly loop over the elements of the array.
Here we'll create a new dataset that is initialized with a sine function
End of explanation
b = g.scratch_array()
type(b)
Explanation: Our grid object can provide us with a scratch array (an ArrayIndexer object) defined on the same grid
End of explanation
b.v()[:,:] = (a.ip(1) - a.ip(-1))/(2.0*a.g.dx)
# normalization was 2.0*pi
b[:,:] /= 2.0*np.pi
plt.plot(g.x[g.ilo:g.ihi+1], a[g.ilo:g.ihi+1,a.g.jc])
plt.plot(g.x[g.ilo:g.ihi+1], b[g.ilo:g.ihi+1,b.g.jc])
print (a.g.dx)
Explanation: We can then fill the data in this array with differenced data from our original array -- since b has a separate data region in memory, its elements are independent of a. We do need to make sure that we have the same number of elements on the left and right of the =. Since by default, ip() will return a view with the same size as the valid region, we can use .v() on the left to accept the differences.
Here we compute a centered-difference approximation to the first derivative
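The same shifted views extend naturally to higher-order stencils. For example, a centered second derivative in x could be written as follows (an illustrative sketch, not part of the original notebook; it reuses the scratch-array and ip() machinery shown above):
```python
d2 = g.scratch_array()
d2.v()[:, :] = (a.ip(1) - 2.0*a.v() + a.ip(-1)) / a.g.dx**2
```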
End of explanation
c = d.restrict("a")
c.pretty_print()
Explanation: Coarsening and prolonging
we can get a new ArrayIndexer object on a coarser grid for one of our variables
End of explanation
f = d.prolong("a")
f.pretty_print(fmt="%6.2g")
Explanation: or a finer grid
End of explanation |
11,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reversible (Diffusion-limited)
This is for an integrated test of E-Cell4. Here, we test a simple reversible association/dissociation model in volume.
Step1: Parameters are given as follows. D, radius, N_A, U, and ka_factor mean a diffusion constant, a radius of molecules, an initial number of molecules of A and B, a ratio of dissociated form of A at the steady state, and a ratio between an intrinsic association rate and the collision rate defined as ka and kD below, respectively. Dimensions of length and time are assumed to be micro-meter and second.
Step2: Calculating optimal reaction rates. ka and kd are intrinsic, kon and koff are effective reaction rates.
Step3: Start with no C molecules, and simulate for 0.35 seconds.
Step4: Make a model with effective rates. This model is for macroscopic simulation algorithms.
Step5: Save a result with ode as obs, and plot it
Step6: Make a model with intrinsic rates. This model is for microscopic (particle) simulation algorithms.
Step7: Simulating with spatiocyte. voxel_radius is given as radius. Use an alpha value well below 1.0 for a diffusion-limited case (bars represent the standard error of the mean)
Step8: Simulating with egfrd | Python Code:
%matplotlib inline
from ecell4.prelude import *
Explanation: Reversible (Diffusion-limited)
This is for an integrated test of E-Cell4. Here, we test a simple reversible association/dissociation model in volume.
End of explanation
D = 1
radius = 0.005
N_A = 60
U = 0.5
ka_factor = 10 # 10 is for diffusion-limited
N = 20 # a number of samples
Explanation: Parameters are given as follows. D, radius, N_A, U, and ka_factor mean a diffusion constant, a radius of molecules, an initial number of molecules of A and B, a ratio of dissociated form of A at the steady state, and a ratio between an intrinsic association rate and the collision rate defined as ka and kD below, respectively. Dimensions of length and time are assumed to be micro-meter and second.
End of explanation
import numpy
kD = 4 * numpy.pi * (radius * 2) * (D * 2)
ka = kD * ka_factor
kd = ka * N_A * U * U / (1 - U)
kon = ka * kD / (ka + kD)
koff = kd * kon / ka
Explanation: Calculating optimal reaction rates. ka and kd are intrinsic, kon and koff are effective reaction rates.
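For reference, the relations implemented in the code above are (with molecular radius $R$ and diffusion constant $D$; this is just a restatement of the code, written out as formulas):
$$k_D = 4\pi(2R)(2D), \qquad k_d = k_a N_A \frac{U^2}{1-U}, \qquad k_{on} = \frac{k_a k_D}{k_a + k_D}, \qquad k_{off} = k_d \frac{k_{on}}{k_a}$$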
End of explanation
y0 = {'A': N_A, 'B': N_A}
duration = 0.35
opt_kwargs = {'legend': True}
Explanation: Start with no C molecules, and simulate for 0.35 seconds.
End of explanation
with species_attributes():
A | B | C | {'radius': radius, 'D': D}
with reaction_rules():
A + B == C | (kon, koff)
m = get_model()
Explanation: Make a model with effective rates. This model is for macroscopic simulation algorithms.
End of explanation
ret1 = run_simulation(duration, y0=y0, model=m)
ret1.plot(**opt_kwargs)
Explanation: Save a result with ode as obs, and plot it:
End of explanation
with species_attributes():
A | B | C | {'radius': radius, 'D': D}
with reaction_rules():
A + B == C | (ka, kd)
m = get_model()
Explanation: Make a model with intrinsic rates. This model is for microscopic (particle) simulation algorithms.
End of explanation
# alpha = 0.03
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('spatiocyte', radius), repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
Explanation: Simulating with spatiocyte. voxel_radius is given as radius. Use an alpha value well below 1.0 for a diffusion-limited case (bars represent the standard error of the mean):
End of explanation
ret2 = ensemble_simulations(duration, ndiv=20, y0=y0, model=m, solver=('egfrd', Integer3(4, 4, 4)), repeat=N)
ret2.plot('o', ret1, '-', **opt_kwargs)
Explanation: Simulating with egfrd:
End of explanation |
11,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework Part 2
Step1: Load 120 seconds of an audio file
Step2: Plot the time-domain waveform of the audio signal
Step3: Play the audio file
Step4: Step 2
Step5: Scale the features to have zero mean and unit variance
Step6: Verify that the scaling worked
Step7: Repeat steps 1 and 2 for another audio file
Step8: Load 120 seconds of an audio file
Step9: Listen to the second audio file.
Step10: Plot the time-domain waveform and spectrogram of the second audio file. In what ways does the time-domain waveform look different than the first audio file? What differences in musical attributes might this reflect? What additional insights are gained from plotting the spectrogram? Explain.
Step11: [Please share your answer in this editable text cell.]
Extract MFCCs from the second audio file. Be sure to transpose the resulting matrix such that each row is one observation, i.e. one set of MFCCs. Also be sure that the shape and size of the resulting MFCC matrix is equivalent to that for the first audio file.
Step12: Scale the resulting MFCC features to have approximately zero mean and unit variance. Re-use the scaler from above.
Step13: Verify that the mean of the MFCCs for the second audio file is approximately equal to zero and the variance is approximately equal to one.
Step14: Step 3
Step15: Construct a vector of ground-truth labels, where 0 refers to the first audio file, and 1 refers to the second audio file.
Step16: Create a classifer model object
Step17: Train the classifier
Step18: Step 4
Step19: Listen to both of the test audio excerpts
Step20: Compute MFCCs from both of the test audio excerpts
Step21: Scale the MFCCs using the previous scaler
Step22: Concatenate all test features together
Step23: Concatenate all test labels together
Step24: Compute the predicted labels
Step25: Finally, compute the accuracy score of the classifier on the test data
Step26: Currently, the classifier returns one prediction for every MFCC vector in the test audio signal. Can you modify the procedure above such that the classifier returns a single prediction for a 10-second excerpt?
Step27: [Explain your approach in this editable text cell.]
Step 5
Step28: Compute the pairwise correlation of every pair of 12 MFCCs against one another for both test audio excerpts. For each audio excerpt, which pair of MFCCs are the most correlated? least correlated?
Step29: [Explain your answer in this editable text cell.]
Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
Step30: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
Step31: Plot a histogram of all values across a single MFCC, i.e. MFCC coefficient number. Repeat for a few different MFCC numbers | Python Code:
filename1 = 'brahms_hungarian_dance_5.mp3'
url = "http://audio.musicinformationretrieval.com/" + filename1
if not os.path.exists(filename1):
urllib.urlretrieve(url, filename=filename1)
Explanation: Homework Part 2: Genre Classification
Goals
Extract features from an audio signal.
Train a genre classifier.
Use the classifier to classify genre in a song.
Step 1: Retrieve Audio
Download an audio file onto your local machine.
End of explanation
librosa.load?
x1, fs1 = librosa.load(filename1, duration=120)
Explanation: Load 120 seconds of an audio file:
End of explanation
plt.plot?
# Your code here:
Explanation: Plot the time-domain waveform of the audio signal:
End of explanation
IPython.display.Audio?
# Your code here:
Explanation: Play the audio file:
End of explanation
librosa.feature.mfcc?
mfcc1 = librosa.feature.mfcc(x1, sr=fs1, n_mfcc=12).T
mfcc1.shape
Explanation: Step 2: Extract Features
For each segment, compute the MFCCs. Experiment with n_mfcc to select a different number of coefficients, e.g. 12.
End of explanation
scaler = sklearn.preprocessing.StandardScaler()
mfcc1_scaled = scaler.fit_transform(mfcc1)
Explanation: Scale the features to have zero mean and unit variance:
End of explanation
mfcc1_scaled.mean(axis=0)
mfcc1_scaled.std(axis=0)
Explanation: Verify that the scaling worked:
End of explanation
filename2 = 'busta_rhymes_hits_for_days.mp3'
url = "http://audio.musicinformationretrieval.com/" + filename2
urllib.urlretrieve?
# Your code here. Download the second audio file in the same manner as the first audio file above.
Explanation: Repeat steps 1 and 2 for another audio file:
End of explanation
librosa.load?
# Your code here. Load the second audio file in the same manner as the first audio file.
Explanation: Load 120 seconds of an audio file:
End of explanation
IPython.display.Audio?
Explanation: Listen to the second audio file.
End of explanation
plt.plot?
# See http://musicinformationretrieval.com/stft.html for more details on displaying spectrograms.
librosa.feature.melspectrogram?
librosa.logamplitude?
librosa.display.specshow?
Explanation: Plot the time-domain waveform and spectrogram of the second audio file. In what ways does the time-domain waveform look different than the first audio file? What differences in musical attributes might this reflect? What additional insights are gained from plotting the spectrogram? Explain.
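As a hint, one possible way to display a log-amplitude mel spectrogram is sketched below (x2 and fs2 are assumed to come from the loading step above, and the exact librosa API may differ between versions):
```python
S = librosa.feature.melspectrogram(x2, sr=fs2, n_mels=128)
log_S = librosa.logamplitude(S, ref_power=numpy.max)
librosa.display.specshow(log_S, sr=fs2, x_axis='time', y_axis='mel')
plt.colorbar(format='%+2.0f dB')
```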
End of explanation
librosa.feature.mfcc?
mfcc2.shape
Explanation: [Please share your answer in this editable text cell.]
Extract MFCCs from the second audio file. Be sure to transpose the resulting matrix such that each row is one observation, i.e. one set of MFCCs. Also be sure that the shape and size of the resulting MFCC matrix is equivalent to that for the first audio file.
End of explanation
scaler.transform?
Explanation: Scale the resulting MFCC features to have approximately zero mean and unit variance. Re-use the scaler from above.
End of explanation
mfcc2_scaled.mean?
mfcc2_scaled.std?
Explanation: Verify that the mean of the MFCCs for the second audio file is approximately equal to zero and the variance is approximately equal to one.
End of explanation
features = numpy.vstack((mfcc1_scaled, mfcc2_scaled))
features.shape
Explanation: Step 3: Train a Classifier
Concatenate all of the scaled feature vectors into one feature table.
End of explanation
labels = numpy.concatenate((numpy.zeros(len(mfcc1_scaled)), numpy.ones(len(mfcc2_scaled))))
Explanation: Construct a vector of ground-truth labels, where 0 refers to the first audio file, and 1 refers to the second audio file.
End of explanation
# Support Vector Machine
model = sklearn.svm.SVC()
Explanation: Create a classifer model object:
End of explanation
model.fit?
# Your code here
Explanation: Train the classifier:
End of explanation
x1_test, fs1 = librosa.load(filename1, duration=10, offset=120)
x2_test, fs2 = librosa.load(filename2, duration=10, offset=120)
Explanation: Step 4: Run the Classifier
To test the classifier, we will extract an unused 10-second segment from the earlier audio files as test excerpts:
End of explanation
IPython.display.Audio?
IPython.display.Audio?
Explanation: Listen to both of the test audio excerpts:
End of explanation
librosa.feature.mfcc?
librosa.feature.mfcc?
Explanation: Compute MFCCs from both of the test audio excerpts:
End of explanation
scaler.transform?
scaler.transform?
Explanation: Scale the MFCCs using the previous scaler:
End of explanation
numpy.vstack?
Explanation: Concatenate all test features together:
End of explanation
numpy.concatenate?
Explanation: Concatenate all test labels together:
End of explanation
model.predict?
Explanation: Compute the predicted labels:
End of explanation
score = model.score(test_features, test_labels)
score
Explanation: Finally, compute the accuracy score of the classifier on the test data:
End of explanation
# Your code here.
Explanation: Currently, the classifier returns one prediction for every MFCC vector in the test audio signal. Can you modify the procedure above such that the classifier returns a single prediction for a 10-second excerpt?
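One simple approach (a sketch, not the only solution): aggregate the frame-level predictions for each excerpt with a majority vote, for example via scipy.stats.mode, reusing the mfcc1_test_scaled and mfcc2_test_scaled features from above:
```python
import scipy.stats

excerpt_prediction_1 = scipy.stats.mode(model.predict(mfcc1_test_scaled))[0][0]
excerpt_prediction_2 = scipy.stats.mode(model.predict(mfcc2_test_scaled))[0][0]
print(excerpt_prediction_1, excerpt_prediction_2)
```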
End of explanation
df1 = pandas.DataFrame(mfcc1_test_scaled)
df1.shape
df1.head()
df2 = pandas.DataFrame(mfcc2_test_scaled)
Explanation: [Explain your approach in this editable text cell.]
Step 5: Analysis in Pandas
Read the MFCC features from the first test audio excerpt into a data frame:
End of explanation
df1.corr()
df2.corr()
Explanation: Compute the pairwise correlation of every pair of 12 MFCCs against one another for both test audio excerpts. For each audio excerpt, which pair of MFCCs are the most correlated? least correlated?
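To read off the extreme pairs programmatically rather than by eye, one option is to blank the diagonal and look for the largest and smallest off-diagonal entries (a sketch using the df1 frame defined above):
```python
corr = df1.corr().values.copy()
numpy.fill_diagonal(corr, numpy.nan)
most = numpy.unravel_index(numpy.nanargmax(corr), corr.shape)
least = numpy.unravel_index(numpy.nanargmin(corr), corr.shape)
print('most correlated pair:', most, ' least correlated pair:', least)
```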
End of explanation
df1.plot.scatter?
Explanation: [Explain your answer in this editable text cell.]
Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
End of explanation
df2.plot.scatter?
Explanation: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
End of explanation
df1[0].plot.hist()
df1[11].plot.hist()
Explanation: Plot a histogram of all values across a single MFCC, i.e. MFCC coefficient number. Repeat for a few different MFCC numbers:
End of explanation |
11,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: PWC-Net-small model training (with multisteps learning rate schedule)
In this notebook, we
Step2: TODO
Step3: Pre-train on FlyingChairs+FlyingThings3DHalfRes mix
Load the dataset
Step4: Configure the training
Step5: Train the model | Python Code:
pwcnet_train.ipynb
PWC-Net model training.
Written by Phil Ferriere
Licensed under the MIT License (see LICENSE for details)
Tensorboard:
[win] tensorboard --logdir=E:\\repos\\tf-optflow\\tfoptflow\\pwcnet-sm-6-2-multisteps-chairsthingsmix
[ubu] tensorboard --logdir=/media/EDrive/repos/tf-optflow/tfoptflow/pwcnet-sm-6-2-multisteps-chairsthingsmix
from __future__ import absolute_import, division, print_function
import sys
from copy import deepcopy
from dataset_base import _DEFAULT_DS_TRAIN_OPTIONS
from dataset_flyingchairs import FlyingChairsDataset
from dataset_flyingthings3d import FlyingThings3DHalfResDataset
from dataset_mixer import MixedDataset
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_TRAIN_OPTIONS
Explanation: PWC-Net-small model training (with multisteps learning rate schedule)
In this notebook, we:
- Use a PWC-Net-small model (no dense or residual connections), 6 level pyramid, uspample level 2 by 4 as the final flow prediction
- Train the model on a mix of the FlyingChairs and FlyingThings3DHalfRes dataset using a S<sub>long</sub> schedule described in [2018a]
- The S<sub>long</sub> schedule is borrowed from [2016a] and looks as follows:
Below, look for TODO references and customize this notebook based on your own machine setup.
Reference
[2018a]<a name="2018a"></a> Sun et al. 2018. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. [arXiv] [web] [PyTorch (Official)] [Caffe (Official)]
[2016a]<a name="2016a"></a> Ilg et al. 2016. FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. [arXiv] [PyTorch (Official)] [TensorFlow]
End of explanation
# TODO: You MUST set dataset_root to the correct path on your machine!
if sys.platform.startswith("win"):
_DATASET_ROOT = 'E:/datasets/'
else:
_DATASET_ROOT = '/media/EDrive/datasets/'
_FLYINGCHAIRS_ROOT = _DATASET_ROOT + 'FlyingChairs_release'
_FLYINGTHINGS3DHALFRES_ROOT = _DATASET_ROOT + 'FlyingThings3D_HalfRes'
# TODO: You MUST adjust the settings below based on the number of GPU(s) used for training
# Set controller device and devices
# A one-gpu setup would be something like controller='/device:GPU:0' and gpu_devices=['/device:GPU:0']
# Here, we use a dual-GPU setup, as shown below
gpu_devices = ['/device:GPU:0', '/device:GPU:1']
controller = '/device:CPU:0'
# TODO: You MUST adjust this setting below based on the amount of memory on your GPU(s)
# Batch size
batch_size = 8
Explanation: TODO: Set this first!
End of explanation
# TODO: You MUST set the batch size based on the capabilities of your GPU(s)
# Load train dataset
ds_opts = deepcopy(_DEFAULT_DS_TRAIN_OPTIONS)
ds_opts['in_memory'] = False # Too many samples to keep in memory at once, so don't preload them
ds_opts['aug_type'] = 'heavy' # Apply all supported augmentations
ds_opts['batch_size'] = batch_size * len(gpu_devices) # Use a multiple of 8; here, 16 for dual-GPU mode (Titan X & 1080 Ti)
ds_opts['crop_preproc'] = (256, 448) # Crop to a smaller input size
ds1 = FlyingChairsDataset(mode='train_with_val', ds_root=_FLYINGCHAIRS_ROOT, options=ds_opts)
ds_opts['type'] = 'into_future'
ds2 = FlyingThings3DHalfResDataset(mode='train_with_val', ds_root=_FLYINGTHINGS3DHALFRES_ROOT, options=ds_opts)
ds = MixedDataset(mode='train_with_val', datasets=[ds1, ds2], options=ds_opts)
# Display dataset configuration
ds.print_config()
Explanation: Pre-train on FlyingChairs+FlyingThings3DHalfRes mix
Load the dataset
End of explanation
# Start from the default options
nn_opts = deepcopy(_DEFAULT_PWCNET_TRAIN_OPTIONS)
nn_opts['verbose'] = True
nn_opts['ckpt_dir'] = './pwcnet-sm-6-2-multisteps-chairsthingsmix/'
nn_opts['batch_size'] = ds_opts['batch_size']
nn_opts['x_shape'] = [2, ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 3]
nn_opts['y_shape'] = [ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 2]
nn_opts['use_tf_data'] = True # Use tf.data reader
nn_opts['gpu_devices'] = gpu_devices
nn_opts['controller'] = controller
# Use the PWC-Net-small model in quarter-resolution mode
nn_opts['use_dense_cx'] = False
nn_opts['use_res_cx'] = False
nn_opts['pyr_lvls'] = 6
nn_opts['flow_pred_lvl'] = 2
# Set the learning rate schedule. This schedule is for a single GPU using a batch size of 8.
# Below,we adjust the schedule to the size of the batch and the number of GPUs.
nn_opts['lr_policy'] = 'multisteps'
nn_opts['lr_boundaries'] = [400000, 600000, 800000, 1000000, 1200000]
nn_opts['lr_values'] = [0.0001, 5e-05, 2.5e-05, 1.25e-05, 6.25e-06, 3.125e-06]
nn_opts['max_steps'] = 1200000
# Below, we adjust the schedule to the size of the batch and our number of GPUs (2).
nn_opts['max_steps'] = int(nn_opts['max_steps'] * 8 / ds_opts['batch_size'])
nn_opts['lr_boundaries'] = [int(boundary * 8 / ds_opts['batch_size']) for boundary in nn_opts['lr_boundaries']]
# More options
nn_opts['max_to_keep'] = 50
nn_opts['display_step'] = 1000
# Instantiate the model and display the model configuration
nn = ModelPWCNet(mode='train_with_val', options=nn_opts, dataset=ds)
nn.print_config()
Explanation: Configure the training
End of explanation
# Train the model
nn.train()
Explanation: Train the model
End of explanation |
11,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting a diagonal covariance Gaussian mixture model to text data
In a previous assignment, we explored k-means clustering for a high-dimensional Wikipedia dataset. We can also model this data with a mixture of Gaussians, though with increasing dimension we run into two important issues associated with using a full covariance matrix for each component.
* Computational cost becomes prohibitive in high dimensions
Step1: We also have a Python file containing implementations for several functions that will be used during the course of this assignment.
Step2: Load Wikipedia data and extract TF-IDF features
Load Wikipedia data and transform each of the first 5000 document into a TF-IDF representation.
Step3: Using a utility we provide, we will create a sparse matrix representation of the documents. This is the same utility function you used during the previous assignment on k-means with text data.
Step4: As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector.
Step5: We can check that the length (Euclidean norm) of each row is now 1.0, as expected.
Step6: EM in high dimensions
EM for high-dimensional data requires some special treatment
Step7: Initializing cluster weights
We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above.
Step8: Initializing covariances
To initialize our covariance parameters, we compute $\hat{\sigma}{k, j}^2 = \sum{i=1}^{N}(x_{i,j} - \hat{\mu}_{k, j})^2$ for each feature $j$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. We do this computation in a vectorized fashion in the following code block.
Step9: Running EM
Now that we have initialized all of our parameters, run EM.
Step10: Interpret clustering results
In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitude of variances in the word dimensions tell us much about the nature of the clusters.
Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix).
A sample output may be
Step11: Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]
Comparing to random initialization
Create variables for randomly initializing the EM algorithm. Complete the following code block.
Step12: Quiz Question | Python Code:
import graphlab
Explanation: Fitting a diagonal covariance Gaussian mixture model to text data
In a previous assignment, we explored k-means clustering for a high-dimensional Wikipedia dataset. We can also model this data with a mixture of Gaussians, though with increasing dimension we run into two important issues associated with using a full covariance matrix for each component.
* Computational cost becomes prohibitive in high dimensions: score calculations have complexity cubic in the number of dimensions M if the Gaussian has a full covariance matrix.
* A model with many parameters requires more data: observe that a full covariance matrix for an M-dimensional Gaussian will have M(M+1)/2 parameters to fit. With the number of parameters growing roughly as the square of the dimension, it may quickly become impossible to find a sufficient amount of data to make good inferences.
Both of these issues are avoided if we require the covariance matrix of each component to be diagonal, as then it has only M parameters to fit and the score computation decomposes into M univariate score calculations. Recall from the lecture that the M-step for the full covariance is:
\begin{align}
\hat{\Sigma}_k &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_i-\hat{\mu}_k)(x_i - \hat{\mu}_k)^T
\end{align}
Note that this is a square matrix with M rows and M columns, and the above equation implies that the (v, w) element is computed by
\begin{align}
\hat{\Sigma}_{k, v, w} &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})(x_{iw} - \hat{\mu}_{kw})
\end{align}
When we assume that this is a diagonal matrix, then non-diagonal elements are assumed to be zero and we only need to compute each of the M elements along the diagonal independently using the following equation.
\begin{align}
\hat{\sigma}^2_{k, v} &= \hat{\Sigma}_{k, v, v} \\
&= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})^2
\end{align}
In this section, we will use an EM implementation to fit a Gaussian mixture model with diagonal covariances to a subset of the Wikipedia dataset. The implementation uses the above equation to compute each variance term.
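For intuition, here is a minimal dense-NumPy sketch of that diagonal M-step for one cluster k (the names X for the document matrix and resp for the responsibilities are illustrative assumptions; the real implementation in em_utilities.py operates on sparse matrices):
```python
Nk = resp[:, k].sum()                                            # soft count N_k
mu_k = (resp[:, k, None] * X).sum(axis=0) / Nk                   # weighted mean per word
sigma2_k = (resp[:, k, None] * (X - mu_k)**2).sum(axis=0) / Nk   # diagonal variances
```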
We'll begin by importing the dataset and coming up with a useful representation for each article. After running our algorithm on the data, we will explore the output to see whether we can give a meaningful interpretation to the fitted parameters in our model.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
End of explanation
from em_utilities import *
Explanation: We also have a Python file containing implementations for several functions that will be used during the course of this assignment.
End of explanation
wiki = graphlab.SFrame('people_wiki.gl/').head(5000)
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
Explanation: Load Wikipedia data and extract TF-IDF features
Load Wikipedia data and transform each of the first 5000 documents into a TF-IDF representation.
End of explanation
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
Explanation: Using a utility we provide, we will create a sparse matrix representation of the documents. This is the same utility function you used during the previous assignment on k-means with text data.
End of explanation
tf_idf = normalize(tf_idf)
Explanation: As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector.
End of explanation
for i in range(5):
doc = tf_idf[i]
print(np.linalg.norm(doc.todense()))
Explanation: We can check that the length (Euclidean norm) of each row is now 1.0, as expected.
End of explanation
from sklearn.cluster import KMeans
np.random.seed(5)
num_clusters = 25
# Use scikit-learn's k-means to simplify workflow
kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=-1)
kmeans_model.fit(tf_idf)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
means = [centroid for centroid in centroids]
Explanation: EM in high dimensions
EM for high-dimensional data requires some special treatment:
* E step and M step must be vectorized as much as possible, as explicit loops are dreadfully slow in Python.
* All operations must be cast in terms of sparse matrix operations, to take advantage of computational savings enabled by sparsity of data.
* Initially, some words may be entirely absent from a cluster, causing the M step to produce zero mean and variance for those words. This means any data point with one of those words will have 0 probability of being assigned to that cluster since the cluster allows for no variability (0 variance) around that count being 0 (0 mean). Since there is a small chance for those words to later appear in the cluster, we instead assign a small positive variance (~1e-10). Doing so also prevents numerical overflow.
We provide the complete implementation for you in the file em_utilities.py. For those who are interested, you can read through the code to see how the sparse matrix implementation differs from the previous assignment.
You are expected to answer some quiz questions using the results of clustering.
Initializing mean parameters using k-means
Recall from the lectures that EM for Gaussian mixtures is very sensitive to the choice of initial means. With a bad initial set of means, EM may produce clusters that span a large area and are mostly overlapping. To eliminate such bad outcomes, we first produce a suitable set of initial means by using the cluster centers from running k-means. That is, we first run k-means and then take the final set of means from the converged solution as the initial means in our EM algorithm.
End of explanation
num_docs = tf_idf.shape[0]
weights = []
for i in xrange(num_clusters):
# Compute the number of data points assigned to cluster i:
num_assigned = ... # YOUR CODE HERE
w = float(num_assigned) / num_docs
weights.append(w)
Explanation: Initializing cluster weights
We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above.
End of explanation
covs = []
for i in xrange(num_clusters):
member_rows = tf_idf[cluster_assignment==i]
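# The next line computes E[(x - mu)^2] per word via the expansion E[x^2] - 2*mu*E[x] + mu^2, which keeps the arithmetic sparse-friendly.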
cov = (member_rows.power(2) - 2*member_rows.dot(diag(means[i]))).sum(axis=0).A1 / member_rows.shape[0] \
+ means[i]**2
cov[cov < 1e-8] = 1e-8
covs.append(cov)
Explanation: Initializing covariances
To initialize our covariance parameters, we compute $\hat{\sigma}_{k, j}^2 = \sum_{i=1}^{N}(x_{i,j} - \hat{\mu}_{k, j})^2$ for each feature $j$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. We do this computation in a vectorized fashion in the following code block.
End of explanation
out = EM_for_high_dimension(tf_idf, means, covs, weights, cov_smoothing=1e-10)
out['loglik']
Explanation: Running EM
Now that we have initialized all of our parameters, run EM.
End of explanation
# Fill in the blanks
def visualize_EM_clusters(tf_idf, means, covs, map_index_to_word):
print('')
print('==========================================================')
num_clusters = len(means)
for c in xrange(num_clusters):
print('Cluster {0:d}: Largest mean parameters in cluster '.format(c))
print('\n{0: <12}{1: <12}{2: <12}'.format('Word', 'Mean', 'Variance'))
# The k'th element of sorted_word_ids should be the index of the word
# that has the k'th-largest value in the cluster mean. Hint: Use np.argsort().
sorted_word_ids = ... # YOUR CODE HERE
for i in sorted_word_ids[:5]:
print '{0: <12}{1:<10.2e}{2:10.2e}'.format(map_index_to_word['category'][i],
means[c][i],
covs[c][i])
print '\n=========================================================='
'''By EM'''
visualize_EM_clusters(tf_idf, out['means'], out['covs'], map_index_to_word)
Explanation: Interpret clustering results
In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitude of variances in the word dimensions tell us much about the nature of the clusters.
Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix).
A sample output may be:
```
==========================================================
Cluster 0: Largest mean parameters in cluster
Word Mean Variance
football 1.08e-01 8.64e-03
season 5.80e-02 2.93e-03
club 4.48e-02 1.99e-03
league 3.94e-02 1.08e-03
played 3.83e-02 8.45e-04
...
```
End of explanation
np.random.seed(5)
num_clusters = len(means)
num_docs, num_words = tf_idf.shape
random_means = []
random_covs = []
random_weights = []
for k in range(num_clusters):
# Create a numpy array of length num_words with random normally distributed values.
# Use the standard univariate normal distribution (mean 0, variance 1).
# YOUR CODE HERE
mean = ...
# Create a numpy array of length num_words with random values uniformly distributed between 1 and 5.
# YOUR CODE HERE
cov = ...
# Initially give each cluster equal weight.
# YOUR CODE HERE
weight = ...
random_means.append(mean)
random_covs.append(cov)
random_weights.append(weight)
Explanation: Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]
Comparing to random initialization
Create variables for randomly initializing the EM algorithm. Complete the following code block.
End of explanation
# YOUR CODE HERE. Use visualize_EM_clusters, which will require you to pass in tf_idf and map_index_to_word.
...
Explanation: Quiz Question: Try fitting EM with the random initial parameters you created above. (Use cov_smoothing=1e-5.) Store the result to out_random_init. What is the final loglikelihood that the algorithm converges to?
Quiz Question: Is the final loglikelihood larger or smaller than the final loglikelihood we obtained above when initializing EM with the results from running k-means?
Quiz Question: For the above model, out_random_init, use the visualize_EM_clusters method you created above. Are the clusters more or less interpretable than the ones found after initializing using k-means?
End of explanation |
11,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explicit feedback movie recommendations
In this example, we'll build a quick explicit feedback recommender system
Step1: The dataset object is an instance of an Interactions class, a fairly light-weight wrapper that Spotlight uses to hold the arrays that contain information about an interactions dataset (such as user and item ids, ratings, and timestamps).
The model
We can feed our dataset to the ExplicitFactorizationModel class - an sklearn-like object that allows us to train and evaluate explicit factorization models.
Internally, the model uses the BilinearNet class to represent users and items. It's composed of 4 embedding layers
Step2: In order to fit and evaluate the model, we need to split it into a train and a test set
Step3: With the data ready, we can go ahead and fit the model. This should take less than a minute on the CPU, and we should see the loss decreasing as the model is learning better and better representations for the user and items in our dataset.
Step4: Now that the model is estimated, how good are its predictions? | Python Code:
import numpy as np
from spotlight.datasets.movielens import get_movielens_dataset
dataset = get_movielens_dataset(variant='100K')
print(dataset)
Explanation: Explicit feedback movie recommendations
In this example, we'll build a quick explicit feedback recommender system: that is, a model that takes into account explicit feedback signals (like ratings) to recommend new content.
We'll use an approach first made popular by the Netflix prize contest: matrix factorization.
The basic idea is very simple:
Start with user-item-rating triplets, conveying the information that user i gave some item j rating r.
Represent both users and items as high-dimensional vectors of numbers. For example, a user could be represented by [0.3, -1.2, 0.5] and an item by [1.0, -0.3, -0.6].
The representations should be chosen so that, when multiplied together (via dot products), we can recover the original ratings.
The utility of the model then is derived from the fact that if we multiply the user vector of a user with the item vector of some item they have not rated, we hope to obtain a prediction for the rating they would have given to it had they seen it.
<img src="static/matrix_factorization.png" alt="Matrix factorization" style="width: 600px;"/>
Spotlight fits models such as these using stochastic gradient descent. The procedure goes roughly as follows:
Start with representing users and items by randomly chosen vectors. Because they are random, they are not going to give useful recommendations, but we are going to improve them as we fit the model.
Go through the (user, item, rating) triplets in the dataset. For every triplet, compute the rating that the model predicts by multiplying the user and item vectors together, and compare the result with the actual rating: the closer they are, the better the model.
If the predicted rating is too low, adjust the user and item vectors (by a small amount) to increase the prediction.
If the predicted rating is too high, adjust the vectors to decrease it.
Continue iterating over the training triplets until the model's accuracy stabilizes.
The data
We start with importing a famous dataset, the Movielens 100k dataset. It contains 100,000 ratings (between 1 and 5) given to 1683 movies by 944 users:
End of explanation
import torch
from spotlight.factorization.explicit import ExplicitFactorizationModel
model = ExplicitFactorizationModel(loss='regression',
embedding_dim=128, # latent dimensionality
n_iter=10, # number of epochs of training
batch_size=1024, # minibatch size
l2=1e-9, # strength of L2 regularization
learning_rate=1e-3,
use_cuda=torch.cuda.is_available())
Explanation: The dataset object is an instance of an Interactions class, a fairly light-weight wrapper that Spotlight uses to hold the arrays that contain information about an interactions dataset (such as user and item ids, ratings, and timestamps).
The model
We can feed our dataset to the ExplicitFactorizationModel class - an sklearn-like object that allows us to train and evaluate explicit factorization models.
Internally, the model uses the BilinearNet class to represent users and items. It's composed of 4 embedding layers:
a (num_users x latent_dim) embedding layer to represent users,
a (num_items x latent_dim) embedding layer to represent items,
a (num_users x 1) embedding layer to represent user biases, and
a (num_items x 1) embedding layer to represent item biases.
Together, these give us the predictions. Their accuracy is evaluated using one of the Spotlight losses. In this case, we'll use the regression loss, which is simply the squared difference between the true and the predicted rating.
End of explanation
from spotlight.cross_validation import random_train_test_split
train, test = random_train_test_split(dataset, random_state=np.random.RandomState(42))
print('Split into \n {} and \n {}.'.format(train, test))
Explanation: In order to fit and evaluate the model, we need to split it into a train and a test set:
End of explanation
model.fit(train, verbose=True)
Explanation: With the data ready, we can go ahead and fit the model. This should take less than a minute on the CPU, and we should see the loss decreasing as the model is learning better and better representations for the user and items in our dataset.
End of explanation
from spotlight.evaluation import rmse_score
train_rmse = rmse_score(model, train)
test_rmse = rmse_score(model, test)
print('Train RMSE {:.3f}, test RMSE {:.3f}'.format(train_rmse, test_rmse))
Explanation: Now that the model is estimated, how good are its predictions?
End of explanation |
11,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
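A sketch of what those three placeholders could look like (batch_size and num_steps are assumed to be defined; this is one possible solution to the exercise below, not the only one):
```python
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
```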
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob)
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Notes on this network/training strategy
We're jumping between parts of various books, yet we're continuing the training as if it were one book
We should consider re-starting the LSTM state every time. This might vary in effect depending on how long the memory is.
Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import re
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
from glob import glob
files = [f for f in glob('*.txt') if f != 'requirements.txt']
files
para_exp = re.compile('\n{2,}')
line_exp = re.compile('\s+')
fixes = {'‘': "'", '’': "'", '“': '"', '”': '"', '`': "'", '_': ' ', '•': '-', '…': '...', '†': '*',
'—': '-'}
def fixit(c):
if c in fixes:
return fixes[c]
return c
def fix_line(l):
return ' '.join(line_exp.split(l))
def readit(f):
with open(f, 'r') as f:
mytext = f.read()
mytext = '\n\n'.join(map(fix_line, para_exp.split(mytext)))
return ''.join(map(fixit, mytext))
text = '\n\n'.join(map(readit, files))
#text = ''.join(map(fixit, text))
vocab = sorted(list(set(text)))
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
print(vocab)
my_vocab = ['\n', ' ', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ':', ';', '<', '=', '>', '?', '@', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '[', '\\', ']', '^', '_', '`', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '{', '|', '}', '~', '£', '©', '°', 'À', 'Æ', 'Ç', 'È', 'É', 'Ü', 'à', 'á', 'â', 'ä', 'æ', 'ç', 'è', 'é', 'ê', 'ë', 'î', 'ï', 'ñ', 'ò', 'ó', 'ô', 'ö', 'ù', 'ú', 'û', 'ü', 'œ', '—', '†', '•', '…']
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
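# Added sanity check (not in the original notebook): decoding the first 100
# encoded integers with int_to_vocab should reproduce the raw text exactly,
# confirming the two lookup tables are consistent inverses of each other.
decoded = ''.join(int_to_vocab[i] for i in encoded[:100])
assert decoded == text[:100]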
print(str(text[:250]))
test = str(text[:201])
test
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr) // batch_size
old_len = len(arr)
# Keep only enough characters to make full batches
new_len = batch_size*n_batches
# print(old_len,new_len)
arr = arr[:new_len]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:,n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
#print(x.shape,y.shape)
y[:,:-1], y[:,-1] = x[:,1:], x[:,0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
1985000/50
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
for x1,y1 in batches:
# print(x.shape,x1.shape)
assert(x.shape == x1.shape)
assert(y.shape == y1.shape)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, shape=(batch_size,num_steps), name='x')
targets = tf.placeholder(tf.int32, shape=(batch_size,num_steps), name='y')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder('float32', name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
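# Quick illustrative check (an addition, not part of the original notebook):
# build the placeholders in a throwaway graph and confirm the static shapes --
# inputs/targets are (batch_size, num_steps) while keep_prob is a 0-D scalar
# with no fixed shape, as described above.
with tf.Graph().as_default():
    ins_demo, targets_demo, kp_demo = build_inputs(10, 50)
    print(ins_demo.get_shape(), targets_demo.get_shape(), kp_demo.get_shape())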
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell outputs
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
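# NOTE: reusing one wrapped cell object in the list like this only works in
# TensorFlow 1.0; for TF >= 1.1, build a fresh cell object per layer as
# described in the markdown above.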
cell = tf.contrib.rnn.MultiRNNCell([drop]*num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
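# Illustrative check (added, not in the original notebook): build a small
# 2-layer cell in a scratch graph and inspect the zero state -- MultiRNNCell
# returns one LSTMStateTuple (c, h) per layer, each of shape (batch_size, lstm_size).
with tf.Graph().as_default():
    kp_demo = tf.placeholder(tf.float32, name='keep_prob')
    cell_demo, state_demo = build_lstm(lstm_size=128, num_layers=2, batch_size=10, keep_prob=kp_demo)
    print(len(state_demo))              # 2, one state tuple per layer
    print(state_demo[0].c.get_shape())  # (10, 128)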
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output, 1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, (-1, in_size))
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.01))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits)
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
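# Tiny numpy illustration of the reshape described above (an added example,
# not from the original notebook): N=2 sequences, M=3 steps, L=4 units collapse
# into an (N*M) x L matrix, one row per (sequence, step) pair.
demo_out = np.arange(2 * 3 * 4).reshape((2, 3, 4))   # N x M x L
print(demo_out.reshape((-1, 4)).shape)               # (6, 4) == (N*M, L)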
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.shape)
# Softmax cross entropy loss
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped))
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
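# Shape illustration for the loss terms (an added sketch, not original code):
# one-hot encoding a 2x2 batch of targets with C=5 classes gives an N x M x C
# tensor, which reshapes to (N*M) x C so it lines up with the logits.
with tf.Graph().as_default(), tf.Session() as demo_sess:
    demo_targets = tf.constant([[1, 2], [0, 3]])                   # N=2, M=2
    demo_onehot = tf.one_hot(demo_targets, 5)                      # N x M x C
    print(demo_sess.run(tf.reshape(demo_onehot, (-1, 5))).shape)   # (4, 5)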
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
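# Quick numeric illustration of gradient clipping (added example, not from the
# original notebook): clip_by_global_norm computes one norm over all gradients
# and rescales them jointly when it exceeds the threshold, so their relative
# directions are preserved.
with tf.Graph().as_default(), tf.Session() as demo_sess:
    demo_grad = tf.constant([3.0, 4.0])                        # global norm = 5
    clipped, gnorm = tf.clip_by_global_norm([demo_grad], clip_norm=1.0)
    print(demo_sess.run(gnorm))       # 5.0
    print(demo_sess.run(clipped[0]))  # approximately [0.6 0.8], rescaled to norm 1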
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.4 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
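# The advice above refers to "the number of parameters in your model". This
# small added helper (not part of the original notebook) builds the graph once
# with the hyperparameters above and reports the trainable parameter count, so
# it can be compared against the size of the training text.
param_model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                      lstm_size=lstm_size, num_layers=num_layers,
                      learning_rate=learning_rate)
n_params = sum(int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())
print('Trainable parameters: {:,}'.format(n_params))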
import matplotlib.pyplot as plt
%matplotlib inline
import sys
all_batches = list(get_batches(encoded, batch_size, num_steps))
np.random.shuffle(all_batches)
pivot = (len(all_batches) * 19) // 20
train, valid = all_batches[:pivot], all_batches[pivot:]
print(len(train), len(valid))
def validate(sess, model):
new_state_ = sess.run(model.initial_state)
vl = 0.0
for x_,y_ in valid:
feed_ = {model.inputs: x_,
model.targets: y_,
model.keep_prob: 1.0,
model.initial_state: new_state_}
valid_loss_, new_state_ = sess.run([model.loss, model.final_state],
feed_dict=feed_)
vl += valid_loss_
# samp = sample_live(sess, model, 80)
# print(samp)
return vl / len(valid)
epochs = 40
# Save every N iterations
save_every_n = 200
train_losses = []
valid_losses = []
valid_loss_iters = []
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
valid_losses.append(validate(sess, model))
valid_loss_iters.append(0)
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
batch_counter = 0
batches = [x for x in train]#list(get_batches(encoded, batch_size, num_steps))
np.random.shuffle(batches)
for x, y in batches:
counter += 1
batch_counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
line = ''.join(['\rEpoch: {}/{}... '.format(e+1, epochs),
'Training Step: {} ({}/{})... '.format(counter,batch_counter,len(batches)),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start))])
sys.stdout.write(line)
sys.stdout.flush()
train_losses.append(batch_loss)
if counter == 15 or (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
valid_losses.append(validate(sess, model))
valid_loss_iters.append(counter)
print('\n\n Validation loss: {:.4f}\n'.format(valid_losses[-1]))
plt.plot(range(counter), train_losses)
plt.plot(valid_loss_iters, valid_losses)
plt.ylim([0,5])
plt.show()
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
valid_losses.append(validate(sess, model))
valid_loss_iters.append(counter)
print('\n\n Validation loss: {:.4f}\n'.format(valid_losses[-1]))
plt.plot(range(counter), train_losses)
plt.plot(valid_loss_iters, valid_losses)
plt.ylim([0,5])
plt.show()
ck = tf.train.latest_checkpoint('checkpoints')
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
saver.restore(sess, ck)
# valid_losses.append(validate(sess, model))
# valid_loss_iters.append(counter)
print('\n\n Validation loss: {:.4f}\n'.format(valid_losses[-1]))
plt.plot(range(counter), train_losses)
plt.plot(valid_loss_iters, valid_losses)
plt.ylim([0,5])
plt.show()
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
batch_counter = 0
batches = [x for x in train]#list(get_batches(encoded, batch_size, num_steps))
np.random.shuffle(batches)
for x, y in batches:
counter += 1
batch_counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
line = ''.join(['\rEpoch: {}/{}... '.format(e+1, epochs),
'Training Step: {} ({}/{})... '.format(counter,batch_counter,len(batches)),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start))])
sys.stdout.write(line)
sys.stdout.flush()
train_losses.append(batch_loss)
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
valid_losses.append(validate(sess, model))
valid_loss_iters.append(counter)
print('\n\n Validation loss: {:.4f}\n'.format(valid_losses[-1]))
plt.plot(range(counter), train_losses)
plt.plot(valid_loss_iters, valid_losses)
plt.ylim([0,5])
plt.show()
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
valid_losses.append(validate(sess, model))
valid_loss_iters.append(counter)
print('\n\n Validation loss: {:.4f}\n'.format(valid_losses[-1]))
plt.plot(range(counter), train_losses)
plt.plot(valid_loss_iters, valid_losses)
plt.ylim([0,5])
plt.show()
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Notes on this network/training strategy
We're jumping between parts of various books, yet we're continuing the training as if it were one book
We should consider re-starting the LSTM state every time. This might vary in effect depending on how long the memory is.
Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample_live(sess, model, n_samples, prime="The "):
samples = [c for c in prime]
# model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
# saver = tf.train.Saver()
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
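# Small added illustration (not in the original notebook) of what pick_top_n
# does: probabilities outside the top N are zeroed, the remainder renormalised,
# and the next character index is drawn from that reduced distribution.
toy_preds = np.array([[0.5, 0.2, 0.15, 0.1, 0.03, 0.02]])
print(pick_top_n(toy_preds, vocab_size=6, top_n=3))  # always one of 0, 1 or 2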
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
print(checkpoint)
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = tf.train.latest_checkpoint('checkpoints')
print(checkpoint)
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = tf.train.latest_checkpoint('checkpoints')
print(checkpoint)
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = tf.train.latest_checkpoint('checkpoints')
checkpoint
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="I was not alone, for foolhardiness was not then ")
print(samp)
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="1. ")
print(samp)
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="A long time ago, in a galaxy far, far away...\n")
print(samp)
checkpoint = 'checkpoints/i3960_l512.ckpt'
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l1024.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l1024.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1400_l768.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1800_l1024.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
11,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook 6
Step1: Download the sequence data
Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data.
Project SRA
Step3: For each ERS (individuals) get all of the ERR (sequence file accessions).
Step4: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
Step5: Merge technical replicates
This study includes several technical replicates per sequenced individuals, which we combine into a single file for each individual here.
Make a params file
Step6: Note
Step7: Assemble in pyrad
Step8: Results
We are interested in the relationship between the amount of input (raw) data for any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples.
Raw data amounts
The average number of raw reads per sample is 1.36M.
Step9: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others.
Step10: Plot the coverage for the sample with highest mean coverage
Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage.
Step11: Print final stats table
Step12: Infer ML phylogeny in raxml as an unrooted tree
Step13: Plot the tree in R using ape
Step14: Get phylo distances (GTRgamma dist)
Step15: Translation to taxon names | Python Code:
### Notebook 6
### Data set 6 (Finches)
### Authors: DaCosta & Sorenson (2016)
### Data Location: SRP059199
Explanation: Notebook 6:
This is an IPython notebook. Most of the code is composed of bash scripts, indicated by %%bash at the top of the cell, otherwise it is IPython code. This notebook includes code to download, assemble and analyze a published RADseq data set.
End of explanation
%%bash
## make a new directory for this analysis
mkdir -p empirical_6/fastq/
Explanation: Download the sequence data
Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data.
Project SRA: SRP059199
BioProject ID: PRJNA285779
Biosample numbers: SAMN03753600 - SAMN03753623
Runs: SRR2053224 -- SRR2053247
SRA link: http://trace.ncbi.nlm.nih.gov/Traces/study/?acc=SRP059199
End of explanation
## IPython code
import pandas as pd
import numpy as np
import urllib2
import os
## open the SRA run table from github url
url = "https://raw.githubusercontent.com/"+\
"dereneaton/RADmissing/master/empirical_6_SraRunTable.txt"
intable = urllib2.urlopen(url)
indata = pd.read_table(intable, sep="\t")
## print first few rows
print indata.head()
def wget_download(SRR, outdir, outname):
Python function to get sra data from ncbi and write to
outdir with a new name using bash call wget
## get output name
output = os.path.join(outdir, outname+".sra")
## create a call string
call = "wget -q -r -nH --cut-dirs=9 -O "+output+" "+\
"ftp://ftp-trace.ncbi.nlm.nih.gov/"+\
"sra/sra-instant/reads/ByRun/sra/SRR/"+\
"{}/{}/{}.sra;".format(SRR[:6], SRR, SRR)
## call bash script
! $call
Explanation: For each ERS (individuals) get all of the ERR (sequence file accessions).
End of explanation
for ID, SRR in zip(indata.Library_Name_s, indata.Run_s):
wget_download(SRR, "empirical_6/fastq/", ID)
%%bash
## convert sra files to fastq using fastq-dump tool
## output as gzipped into the fastq directory
fastq-dump --gzip -O empirical_6/fastq/ empirical_6/fastq/*.sra
## remove .sra files
rm empirical_6/fastq/*.sra
%%bash
ls -l empirical_6/fastq/
Explanation: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
End of explanation
%%bash
pyrad --version
%%bash
## remove old params file if it exists
rm params.txt
## create a new default params file
pyrad -n
Explanation: Merge technical replicates
This study includes several technical replicates per sequenced individuals, which we combine into a single file for each individual here.
Make a params file
End of explanation
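## The replicate-merging step itself is not shown in this copy of the notebook.
## Below is a minimal sketch of one way to do it, assuming replicate files share
## a sample prefix such as "A167_1.fastq.gz", "A167_2.fastq.gz" -- this naming
## convention is an assumption for illustration, not taken from the study.
import glob
import subprocess
rep_files = glob.glob("empirical_6/fastq/*_*.fastq.gz")
samples = sorted(set(f.rsplit("_", 1)[0] for f in rep_files))
for sample in samples:
    parts = sorted(glob.glob(sample + "_*.fastq.gz"))
    ## concatenated gzip members still form one valid gzip file
    subprocess.check_call("cat {} > {}.fastq.gz".format(" ".join(parts), sample),
                          shell=True)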
%%bash
## substitute new parameters into file
sed -i '/## 1. /c\empirical_6/ ## 1. working directory ' params.txt
sed -i '/## 6. /c\CCTGCAGG,AATTC ## 6. cutters ' params.txt
sed -i '/## 7. /c\20 ## 7. N processors ' params.txt
sed -i '/## 9. /c\6 ## 9. NQual ' params.txt
sed -i '/## 10./c\.85 ## 10. clust threshold ' params.txt
sed -i '/## 12./c\4 ## 12. MinCov ' params.txt
sed -i '/## 13./c\10 ## 13. maxSH ' params.txt
sed -i '/## 14./c\empirical_6_m4 ## 14. output name ' params.txt
sed -i '/## 18./c\empirical_6/fastq/*.gz ## 18. data location ' params.txt
sed -i '/## 29./c\2,2 ## 29. trim overhang ' params.txt
sed -i '/## 30./c\p,n,s ## 30. output formats ' params.txt
cat params.txt
Explanation: Note:
The data here are from Illumina Casava <1.8, so the phred scores are offset by 64 instead of 33, so we use that in the params file below.
End of explanation
%%bash
pyrad -p params.txt -s 234567 >> log.txt 2>&1
%%bash
sed -i '/## 12./c\2 ## 12. MinCov ' params.txt
sed -i '/## 14./c\empirical_6_m2 ## 14. output name ' params.txt
%%bash
pyrad -p params.txt -s 7 >> log.txt 2>&1
Explanation: Assemble in pyrad
End of explanation
import pandas as pd
## read in the data
s2dat = pd.read_table("empirical_6/stats/s2.rawedit.txt", header=0, nrows=25)
## print summary stats
print s2dat["passed.total"].describe()
## find which sample has the most raw data
maxraw = s2dat["passed.total"].max()
print "\nmost raw data in sample:"
print s2dat['sample '][s2dat['passed.total']==maxraw]
Explanation: Results
We are interested in the relationship between the amount of input (raw) data for any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples.
Raw data amounts
The average number of raw reads per sample is 1.36M.
End of explanation
## read in the s3 results
s6dat = pd.read_table("empirical_6/stats/s3.clusters.txt", header=0, nrows=25)
## print summary stats
print "summary of means\n=================="
print s6dat['dpt.me'].describe()
## print summary stats
print "\nsummary of std\n=================="
print s6dat['dpt.sd'].describe()
## print summary stats
print "\nsummary of proportion lowdepth\n=================="
print pd.Series(1-s6dat['d>5.tot']/s6dat["total"]).describe()
## find which sample has the greatest depth of retained loci
max_hiprop = (s6dat["d>5.tot"]/s6dat["total"]).max()
print "\nhighest coverage in sample:"
print s6dat['taxa'][s6dat['d>5.tot']/s6dat["total"]==max_hiprop]
import numpy as np
## print mean and std of coverage for the highest coverage sample
with open("empirical_6/clust.85/A167.depths", 'rb') as indat:
depths = np.array(indat.read().strip().split(","), dtype=int)
print depths.mean(), depths.std()
Explanation: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others.
End of explanation
import toyplot
import toyplot.svg
import numpy as np
## read in the depth information for this sample
with open("empirical_6/clust.85/A167.depths", 'rb') as indat:
depths = np.array(indat.read().strip().split(","), dtype=int)
## make a barplot in Toyplot
canvas = toyplot.Canvas(width=350, height=300)
axes = canvas.axes(xlabel="Depth of coverage (N reads)",
ylabel="N loci",
label="dataset6/sample=A167")
## select the loci with depth > 5 (kept)
keeps = depths[depths>5]
## plot kept and discarded loci
edat = np.histogram(depths, range(30)) # density=True)
kdat = np.histogram(keeps, range(30)) #, density=True)
axes.bars(edat)
axes.bars(kdat)
#toyplot.svg.render(canvas, "empirical_6_depthplot.svg")
Explanation: Plot the coverage for the sample with highest mean coverage
Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage.
End of explanation
cat empirical_6/stats/empirical_6_m4.stats
%%bash
head -n 20 empirical_6/stats/empirical_6_m2.stats
Explanation: Print final stats table
End of explanation
%%bash
## raxml argument w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_6/ \
-n empirical_6_m4 -s empirical_6/outfiles/empirical_6_m4.phy
%%bash
## raxml argument w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_6/ \
-n empirical_6_m2 -s empirical_6/outfiles/empirical_6_m2.phy
%%bash
head -n 20 empirical_6/RAxML_info.empirical_6_m4
%%bash
head -n 20 empirical_6/RAxML_info.empirical_6_m2
Explanation: Infer ML phylogeny in raxml as an unrooted tree
End of explanation
%load_ext rpy2.ipython
%%R -h 800 -w 800
library(ape)
tre <- read.tree("empirical_6/RAxML_bipartitions.empirical_6")
ltre <- ladderize(tre)
par(mfrow=c(1,2))
plot(ltre, use.edge.length=F)
nodelabels(ltre$node.label)
plot(ltre, type='u')
Explanation: Plot the tree in R using ape
End of explanation
%%R
mean(cophenetic.phylo(ltre))
Explanation: Get phylo distances (GTRgamma dist)
End of explanation
print pd.DataFrame([indata.Library_Name_s, indata.Organism_s]).T
Explanation: Translation to taxon names
End of explanation |
11,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-esm2-hr5', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-ESM2-HR5
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
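For an ENUM property such as this one, a completed cell simply passes one of the listed valid choices to DOC.set_value; the choice below is an illustration only and not necessarily the correct value for CMCC-ESM2-HR5.
# Illustration only - pick the choice that actually applies to the model
DOC.set_value("Geochemical")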
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
11,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aiida and the aiida-plugins
1. aiida-v0.12.1 installation (released in Jan 2018)
aiida-v0.12.1 was released in the summer of 2018, so I removed the previous v0.11.0 from my Mac and installed the new version.
In the following, I'll show how to install it and the other aiida plugins on macOS.
1.1 set up a virtual environment for aiida running
Step1: 1.2 install aiida_core of epfl version
Now we need to download aiida_core v0.12.1 and put it in a folder of your choice. For me, the aiida_core directory is '/Users/ywfang/FANG/study/software/all-aiida'
Step2: 1.3 postgresql settings
Run 'verdi daemon start'; the computer will remind you that you haven't set up the daemon user yet, so just follow the prompts (as you will see below, I use my email address as the default user).
Please check the status of the daemon
Step3: 2. Installing plugins
2.1 Installing aiida-vasp plugin
Step4: After git, you'll see a new folder named aiida-vasp
Step5: pymatgen parser setting
Step6: 2.2 Installing aiida-phonopy plugin
Step7: You then see a folder "aiida-phonopy"
Step8: 2.3 Verdi command completion configuration
For older versions of aiida_core, use
eval "$(verdi completioncommand)"
For new versions, use
eval "$(verdi completioncommand)"
eval "$(_VERDI_COMPLETE=source verdi)"
Since I didn't set a password in 'verdi quicksetup', I ran 'verdi setup aiida1' and reset my profile. The profile information is stored in a file like
Step9: Set up local computer as localhost
Appended note in March 5th 2018
Reference
Step10: 2.4.2 Test the computer
Step11: 2.4.3 Some useful commands
verdi computer list
to get a list of existing computers, and
Step12: Attention: for setting up the codes for phonopy and phono3py, the "Default input plugins" are phonopy.phonopy and phonopy.phono3py, respectively.
Some useful commands
Step13: The command to run this job
Step16: The calculation of primitive_axis in phonopy
Step17: The solved x is the primitive_axis that can be used in the phonopy calculations
Now, let's add another example: the boron crystal
Step18: an example of spider
Step19: aiida phono3py data analysis
verdi calculation list to get the pk number corresponding to the raw data.
Run verdi shell | Python Code:
conda create -n aiida-debug python=2.7 # set up a virtual environment
conda activate aiida-debug
# on macOS, a command like this is sometimes required first:
# sudo ln -s /Users/ywfang/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh
conda install postgresql
Explanation: Aiida and the aiida-plugins
1. aiida-v0.12.1 installation (released in Jan 2018)
aiida-v0.12.1 was released in the summer of 2018, so I removed the previous v0.11.0 from my Mac and installed the new version.
In the following, I'll show how to install it and the other aiida plugins on macOS.
1.1 set up a virtual environment for aiida running
End of explanation
git clone [email protected]:aiidateam/aiida_core.git
cd aiida_core
git checkout -b v0.12.1
pip install .
#make sure that you have pip, if it is not available,
#use 'conda install pip'
Explanation: 1.2 install aiida_core of epfl version
Now we need to download aiida_core v0.12.1 and put it in a folder of your choice. For me, the aiida_core directory is '/Users/ywfang/FANG/study/software/all-aiida'
End of explanation
verdi profile setdefault verdi aiida_profile_201808
verdi profile setdefault daemon aiida_profile_201808 #
Explanation: 1.3 postgresql settings
Run 'verdi daemon start'; the computer will remind you that you haven't set up the daemon user yet, so just follow the prompts (as you will see below, I use my email address as the default user).
Please check the status of the daemon:
AiiDA allows multiple profiles for the same user, but we need to set one of them as the default. One profile corresponds to one database.
End of explanation
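The daemon and profile checks mentioned above can be done with the standard verdi sub-commands, for example (a minimal sketch; output omitted):
verdi daemon start      # start the daemon; the first run asks you to configure the default user
verdi daemon status     # check that the daemon workers are running
verdi profile list      # list the available profiles and the current default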
git clone [email protected]:DropD/aiida-vasp.git
Explanation: 2. Installing plugins
2.1 Installing aiida-vasp plugin
End of explanation
cd aiida-vasp/
git checkout develop #switch to develop version
Explanation: After git, you'll see a new folder named aiida-vasp
End of explanation
pip install . # installed successfully
Explanation: pymatgen parser setting: this step is quite important for the aiida-vasp plugin. I don't like this feature very much because I usually forget this step. Luckily, Abel has reminded me many times (at least 5, -_-) over the last month. Many thanks to him.
vi aiida_vasp/calcs/vasp.py, change the line
default_parser = 'vasp.vasp'
into
default_parser = 'vasp.pymatgen'
End of explanation
git clone [email protected]:abelcarreras/aiida-phonopy.git # this code was developed by Abel
Explanation: 2.2 Installing aiida-phonopy plugin
End of explanation
cd aiida-phonopy
git checkout development #switch to develop version
pip install . # installed successfully
Explanation: You then see a folder "aiida-phonopy"
End of explanation
(aiida) h44:aiida_plugins ywfang$ verdi computer setup
At any prompt, type ? to get some help.
---------------------------------------
=> Computer name: stern-lab
Creating new computer with name 'stern-lab'
=> Fully-qualified hostname: 88.88.88.88 #here is the IP address of your computer
=> Description: go to stern-lab from mac
=> Enabled: True
=> Transport type: ssh
=> Scheduler type: sge
=> shebang line at the beginning of the submission script: #!/bin/bash
=> AiiDA work directory: /home/ywfang/aiida_run_mac
=> mpirun command: mpirun -np {tot_num_mpiprocs}
=> Text to prepend to each command execution:
# This is a multiline input, press CTRL+D on a
# empty line when you finish
# ------------------------------------------
# End of old input. You can keep adding
# lines, or press CTRL+D to store this value
# ------------------------------------------
export PATH=/usr/local/calc/openmpi-1.10.2-ifort-togo/bin/:$PATH #change these two lines according to your system
export LD_LIBRARY_PATH=/opt/intel/Compiler/11.1/069/lib/intel64/:$LD_LIBRARY_PATH
=> Text to append to each command execution:
# This is a multiline input, press CTRL+D on a
# empty line when you finish
# ------------------------------------------
# End of old input. You can keep adding
# lines, or press CTRL+D to store this value
# ------------------------------------------
Computer 'stern-lab' successfully stored in DB.
pk: 1, uuid: f54386aa-0b7f-4576-8faa-666d9429980c
Note: before using it with AiiDA, configure it using the command
verdi computer configure stern-lab
(Note: machine_dependent transport parameters cannot be set via
the command-line interface at the moment)
Explanation: 2.3 Verdi command completion configuration
For older versions of aiida_core, use
eval "$(verdi completioncommand)"
For new versions, use
eval "$(verdi completioncommand)"
eval "$(_VERDI_COMPLETE=source verdi)"
Since I didn't set a password in 'verdi quicksetup', I ran 'verdi setup aiida1' and reset my profile. The profile information is stored in a file like:
~/.aiida/config.json
2.4 Setup of computers
2.4.1 Setup the computer
For the supercomputer on which you want to perform calculations, you have to prepare a password-less connection before setting up the computer (use ssh-keygen; there are many blogs on this topic). Here I skip this step and go to 'verdi computer setup' directly.
End of explanation
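For completeness, a minimal sketch of the password-less connection with a standard OpenSSH client might look like this (the username and IP address are the ones used elsewhere in this note; adjust them to your own cluster):
ssh-keygen -t rsa              # accept the defaults; creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
ssh-copy-id [email protected]      # append the public key to the remote ~/.ssh/authorized_keys
ssh [email protected]              # should now log in without asking for a password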
(aiida) h44:aiida_plugins ywfang$ verdi computer configure stern-lab
Configuring computer 'stern-lab' for the AiiDA user '[email protected]'
Computer stern-lab has transport of type ssh
Note: to leave a field unconfigured, leave it empty and press [Enter]
=> username = ywfang
=> port = 22
=> look_for_keys = ~/.ssh/id_rsa
Error in the inserted value: look_for_keys must be an boolean
=> look_for_keys =
=> key_filename = ~/.ssh/id_rsa
=> timeout = 60
=> allow_agent =
=> proxy_command =
=> compress = False
=> gss_auth = no
=> gss_kex = no
=> gss_deleg_creds = no
=> gss_host = 88.88.88.88
=> load_system_host_keys = True
=> key_policy = WarningPolicy
Configuration stored for your user on computer 'stern-lab'.
Explanation: Set up local computer as localhost
Appended note in March 5th 2018
Reference: https://github.com/ltalirz/aiida-zeopp
It is also possible to set up the local computer as a host to do computations. For example, here is my local computer's information:
Computer name: localhost
* PK: 4
* UUID: ..................
* Description: my local computer
* Hostname: localhost
* Enabled: True
* Transport type: local
* Scheduler type: direct
* Work directory: /home/ywfang/aiidalocal-run
* mpirun command:
* Default number of cpus per machine: 8
* Used by: 4 nodes
* prepend text:
# No prepend text.
* append text:
# No append text.
After setting up the local computer, remember to configure it.
Here, correspondingly I give the code information in my local computer as an example:
PK: 186209
UUID:
Label: phonopy
Description: phonopy in localhost
Default plugin: phonopy.phonopy
Used by: 1 calculations
Type: remote
Remote machine: localhost
Remote absolute path:
/home/ywfang/miniconda3/envs/aiida/bin/phonopy
prepend text:
# No prepend text.
append text:
# No append text.
A comment to the 'direct' scheduler:
The direct scheduler, to be used mainly for debugging, is an implementation of a scheduler plugin that does not require a real scheduler installed, but instead directly executes a command, puts it in the background, and checks for its process ID (PID) to discover if the execution is completed.
Warning
The direct execution mode is very fragile. Currently, it spawns a separate Bash shell to execute a job and track each shell by process ID (PID). This poses following problems:
PID numeration is reset during reboots;
PID numeration is different from machine to machine, thus direct execution is not possible in multi-machine clusters, redirecting each SSH login to a different node in round-robin fashion;
there is no real queueing, hence, all calculation started will be run in parallel.
2.4.1 Configure the computer
End of explanation
(aiida) h44:aiida_plugins ywfang$ verdi computer test stern-lab
Testing computer 'stern-lab' for user [email protected]...
> Testing connection...
> Getting job list...
`-> OK, 30 jobs found in the queue.
> Creating a temporary file in the work directory...
>>>>>
>>>>>>.....
..........
.........
[Deleted successfully]
Test completed (all 3 tests succeeded)
Explanation: 2.4.2 Test the computer
End of explanation
(aiida) h44:aiida_plugins ywfang$ verdi code setup
At any prompt, type ? to get some help.
---------------------------------------
=> Label: vasp
=> Description: vasp541 at stern-lab
=> Local: False
=> Default input plugin: vasp.vasp # you can use verdi calculation plugins to check what plugins you have
=> Remote computer name: stern-lab
=> Remote absolute path: /usr/local/vasp/vasp541
=> Text to prepend to each command execution
FOR INSTANCE, MODULES TO BE LOADED FOR THIS CODE:
# This is a multiline input, press CTRL+D on a
# empty line when you finish
# ------------------------------------------
# End of old input. You can keep adding
# lines, or press CTRL+D to store this value
# ------------------------------------------
=> Text to append to each command execution:
# This is a multiline input, press CTRL+D on a
# empty line when you finish
# ------------------------------------------
# End of old input. You can keep adding
# lines, or press CTRL+D to store this value
# ------------------------------------------
Code 'vasp' successfully stored in DB.
pk: 1, uuid: 03875075-0cc1-4938-8943-c46d3ee2aecd
Explanation: 2.4.3 Some useful commands
verdi computer list
to get a list of existing computers, and:
verdi computer show COMPUTERNAME
to get detailed information on the specific computer named COMPUTERNAME. You have also the:
verdi computer rename OLDCOMPUTERNAME NEWCOMPUTERNAME
and:
verdi computer delete COMPUTERNAME
2.5 Setup the code
Basically there are two kinds of code in AiiDA: local codes, and the so-called remote codes that live on a supercomputer or cluster. Please refer to [aiida](http://aiida-core.readthedocs.io/en/release_v0.11.0/get_started/index.html) for their differences.
Here I show an example of setting up a remote code (vasp):
End of explanation
import click
import numpy
@click.command()
@click.option('--paw-family', type=str, default='vasp-test')
@click.option('--import-from', type=click.Path(), default='.')
@click.option('--queue', type=str, default='')
@click.argument('code', type=str)
@click.argument('computer', type=str)
def test_vasp(paw_family, import_from, queue, code, computer):
load_dbenv_if_not_loaded()
from aiida.orm import CalculationFactory, Code
if import_from:
import_paws(import_from, paw_family)
# try:
paw_si = get_paws(paw_family)
# except ValueError as err:
# click.echo(err.msg, err=True)
# raise ValueError(
# 'give a valid family or import a new one (run with --help)')
vasp_calc = CalculationFactory('vasp.vasp')()
vasp_calc.use_structure(create_structure2())
vasp_calc.use_kpoints(create_kpoints())
vasp_calc.use_parameters(create_params())
code = Code.get_from_string('{}@{}'.format(code, computer))
vasp_calc.use_code(code)
# vasp_calc.use_paw(paw_in, 'In')
# vasp_calc.use_paw(paw_as, 'As')
vasp_calc.use_paw(paw_si, 'Si')
vasp_calc.set_computer(code.get_computer())
vasp_calc.set_queue_name(queue)
vasp_calc.set_resources({
'num_machines': 1,
'parallel_env': 'mpi*',
'tot_num_mpiprocs': 16
})
vasp_calc.label = 'Test VASP run'
vasp_calc.store_all()
vasp_calc.submit()
vasp_calc = CalculationFactory('vasp.vasp')
#from aiida_vasp.parsers.pymatgen_vasp import PymatgenParser
#vasp_calc.set_parser_cls(PymatgenParser)
#vasp_calc.use_settings(DataFactory('parameter')(dict={
# 'pymatgen_parser': {
# 'exception_on_bad_xml': False,
# 'parse_dos': False
# }
# }
#))
def load_dbenv_if_not_loaded():
from aiida import load_dbenv, is_dbenv_loaded
if not is_dbenv_loaded():
load_dbenv()
def get_data_cls(descriptor):
load_dbenv_if_not_loaded()
from aiida.orm import DataFactory
return DataFactory(descriptor)
def create_structure():
structure_cls = get_data_cls('structure')
structure = structure_cls(
cell=numpy.array([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]]) * 6.058, )
structure.append_atom(position=(0, 0, 0), symbols='In')
structure.append_atom(position=(0.25, 0.25, 0.25), symbols='As')
return structure
def create_structure2():
import numpy as np
from aiida.orm import DataFactory
StructureData = DataFactory('structure')
a = 5.404
cell = [[a, 0, 0],
[0, a, 0],
[0, 0, a]]
symbols=['Si'] * 8
scaled_positions = [(0.875, 0.875, 0.875),
(0.875, 0.375, 0.375),
(0.375, 0.875, 0.375),
(0.375, 0.375, 0.875),
(0.125, 0.125, 0.125),
(0.125, 0.625, 0.625),
(0.625, 0.125, 0.625),
(0.625, 0.625, 0.125)]
structure = StructureData(cell=cell)
positions = np.dot(scaled_positions, cell)
for i, scaled_position in enumerate(scaled_positions):
structure.append_atom(position=np.dot(scaled_position, cell).tolist(),
symbols=symbols[i])
return structure
def create_kpoints():
kpoints_cls = get_data_cls('array.kpoints')
return kpoints_cls(kpoints_mesh=[8, 8, 8])
def create_params():
param_cls = get_data_cls('parameter')
return param_cls(dict={
'SYSTEM': 'InAs',
'EDIFF': 1e-5,
'ISMEAR': 0,
'SIGMA': 0.05,
'ENCUT': '280.00 eV',
'LEPSILON': '.TRUE.'
})
def import_paws(folder_path, family_name):
load_dbenv_if_not_loaded()
from aiida.orm import DataFactory
paw_cls = DataFactory('vasp.paw')
paw_cls.import_family(
folder_path, familyname=family_name, family_desc='Test family')
def get_paws(family_name):
load_dbenv_if_not_loaded()
from aiida.orm import DataFactory
paw_cls = DataFactory('vasp.paw')
#paw_in = paw_cls.load_paw(family=family_name, symbol='In')[0]
#paw_as = paw_cls.load_paw(family=family_name, symbol='As')[0]
paw_si = paw_cls.load_paw(family=family_name, symbol='Si')[0]
return paw_si
if __name__ == '__main__':
test_vasp()
Explanation: Attention: for setting up the codes for phonopy and phono3py, the "Default input plugins" are phonopy.phonopy and phonopy.phono3py, respectively.
Some useful commands:
verdi code rename "ID" (for example, vasp@stern-lab is an ID)
verdi code list
verdi code show "ID"
verdi code delete "ID"
2.6 Run example
Now you can test some examples. Please always remember to restart your daemon if you modify the environment or anything related to the computer or code: verdi daemon restart
The example I tested comes from the aiida-vasp plugin (probably with some small modifications of mine):
End of explanation
(aiida) h44:test-example-for-mac ywfang$ verdi calculation list -a
# Last daemon state_updater check: 0h:00m:07s ago (at 18:02:35 on 2018-01-19)
PK Creation State Sched. state Computer Type
---- ---------- -------- -------------- ---------- ---------
18 4m ago FINISHED DONE boston-lab vasp.vasp
Total results: 1
Explanation: The command to run this job:
"python run_vasp_old-boston.py --import-from /Users/ywfang/FANG/study/software/all-aiida/aiida_plugins/test-example-for-mac/VASP_TEST/Si vasp boston-lab"
Check the status of the calculations
End of explanation
(*Take KF compound as an example*)
(*In the unit cell of cubic KF,
there are 8 atoms, half of them are K, and the other half are F*)
This is the unit cell structure:
F K
1.0
5.3683051700000002 0.0000000000000000 0.0000000000000000
0.0000000000000000 5.3683051700000002 0.0000000000000000
0.0000000000000000 0.0000000000000000 5.3683051700000002
F K
4 4
Direct
0.0000000000000000 0.0000000000000000 0.0000000000000000
0.0000000000000000 0.5000000000000000 0.5000000000000000
0.5000000000000000 0.0000000000000000 0.5000000000000000
0.5000000000000000 0.5000000000000000 0.0000000000000000
0.5000000000000000 0.0000000000000000 0.0000000000000000
0.5000000000000000 0.5000000000000000 0.5000000000000000
0.0000000000000000 0.0000000000000000 0.5000000000000000
0.0000000000000000 0.5000000000000000 0.0000000000000000
#(*using "phonopy --symmetry --tol=0.01" to get the primitive psocar*)
The primitive poscar is
F K
1.0
0.0000000000000000 2.6841525850000001 2.6841525850000001
2.6841525850000001 0.0000000000000000 2.6841525850000001
2.6841525850000001 2.6841525850000001 0.0000000000000000
1 1
Direct
0.0000000000000000 0.0000000000000000 0.0000000000000000
0.5000000000000000 0.5000000000000000 0.5000000000000000
import numpy as np
uc_vector = np.matrix([
[5.3683051700000002, 0, 0],
[0, 5.3683051700000002, 0],
[0, 0, 5.3683051700000002]
])
uc_vector
primitive_vector = np.matrix([
[0, 2.6841525850000001, 2.6841525850000001],
[2.6841525850000001, 0, 2.6841525850000001],
[2.6841525850000001, 2.6841525850000001, 0]
])
primitive_vector
x = np.linalg.solve(uc_vector.T, primitive_vector.T)
x
Explanation: The calculation of primitive_axis in phonopy
End of explanation
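As a small sanity check (not part of the original notes, but using only the uc_vector, primitive_vector and x defined above), the solved matrix should map the unit-cell vectors onto the primitive vectors:
# uc_vector.T * x should reproduce primitive_vector.T (np.matrix uses * for matrix multiplication)
print(np.allclose(uc_vector.T * x, primitive_vector.T))
# for this cubic KF cell the entries of x are 0 and 1/2, the familiar fcc transformation matrix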
#the conventional cell of B contains 36 atoms
B
1.0
4.8786175399999996 0.0000000000000000 0.0000000000000000
-2.4393087699999998 4.2250067299999996 0.0000000000000000
0.0000000000000000 0.0000000000000000 12.5104310900000009
B
36
Direct
0.8810421900000001 0.1189578100000000 0.1082863700000000
0.2379156300000000 0.1189578100000000 0.1082863700000000
0.8810421900000001 0.7620843700000000 0.1082863700000000
0.7856244800000000 0.5712489599999999 0.2250469600000000
0.7856244800000000 0.2143755200000000 0.2250469600000000
0.4287510400000000 0.2143755200000000 0.2250469600000000
0.4692953000000000 0.5307047000000000 0.3089958100000000
0.0614094100000000 0.5307047000000000 0.3089958100000000
0.4692953000000000 0.9385905900000000 0.3089958100000000
0.1973713700000000 0.3947427400000000 0.0243375300000000
0.1973713700000000 0.8026286300000001 0.0243375300000000
0.6052572600000000 0.8026286300000001 0.0243375300000000
0.5477088500000000 0.4522911500000000 0.4416197100000000
0.9045823000000000 0.4522911500000000 0.4416197100000000
0.5477088500000000 0.0954177000000000 0.4416197100000000
0.4522911500000000 0.9045823000000000 0.5583802900000000
0.4522911500000000 0.5477088500000000 0.5583802900000000
0.0954177000000000 0.5477088500000000 0.5583802900000000
0.1359619600000000 0.8640380400000001 0.6423291400000000
0.7280760700000000 0.8640380400000001 0.6423291400000000
0.1359619600000000 0.2719239300000000 0.6423291400000000
0.8640380400000001 0.7280760700000000 0.3576708600000000
0.8640380400000001 0.1359619600000000 0.3576708600000000
0.2719239300000000 0.1359619600000000 0.3576708600000000
0.2143755200000000 0.7856244800000000 0.7749530400000000
0.5712489599999999 0.7856244800000000 0.7749530400000000
0.2143755200000000 0.4287510400000000 0.7749530400000000
0.1189578100000000 0.2379156300000000 0.8917136300000000
0.1189578100000000 0.8810421900000001 0.8917136300000000
0.7620843700000000 0.8810421900000001 0.8917136300000000
0.8026286300000001 0.1973713700000000 0.9756624699999999
0.3947427400000000 0.1973713700000000 0.9756624699999999
0.8026286300000001 0.6052572600000000 0.9756624699999999
0.5307047000000000 0.0614094100000000 0.6910041900000000
0.5307047000000000 0.4692953000000000 0.6910041900000000
0.9385905900000000 0.4692953000000000 0.6910041900000000
#the primitive cell of B can be written as (use phonopy command to get this primitive cell)
B
1.0
2.4393087710850549 1.4083355756225715 4.1701436966666670
-2.4393087710850549 1.4083355756225715 4.1701436966666670
0.0000000000000000 -2.8166711512451430 4.1701436966666670
12
Direct
0.9893285600000000 0.3462020033333332 0.9893285600000000
0.3462020033333332 0.9893285600000000 0.9893285600000000
0.9893285600000000 0.9893285600000000 0.3462020033333332
0.0106714399999999 0.0106714399999998 0.6537979966666668
0.0106714399999999 0.6537979966666666 0.0106714400000000
0.6537979966666667 0.0106714400000000 0.0106714400000000
0.7782911033333333 0.3704052166666665 0.7782911033333333
0.3704052166666665 0.7782911033333333 0.7782911033333333
0.7782911033333333 0.7782911033333333 0.3704052166666666
0.2217088966666666 0.2217088966666666 0.6295947833333334
0.2217088966666666 0.6295947833333333 0.2217088966666668
0.6295947833333333 0.2217088966666665 0.2217088966666668
import numpy as np
uc_vector_B = np.matrix([
[4.8786175399999996, 0.0000000000000000, 0.0000000000000000],
[-2.4393087699999998, 4.2250067299999996, 0.0000000000000000],
[0.0000000000000000, 0.0000000000000000, 12.5104310900000009]
])
primitive_vector_B = np.matrix([
[2.4393087710850549, 1.4083355756225715, 4.1701436966666670],
[-2.4393087710850549, 1.4083355756225715, 4.1701436966666670],
[0.0000000000000000, -2.8166711512451430, 4.1701436966666670]
])
x_B = np.linalg.solve(uc_vector_B.T, primitive_vector_B.T)
x_B
#The structures of the two examples above are taken from Togo-sensei's phonon database (I could have relaxed them with higher accuracy)
#the phonopy.conf files in the database include the primitive_axis information, which can be used as a comparison with my
#calculations here.
#for the definition of primitive_axis in phonopy, please see the manual or visit
#https://atztogo.github.io/phonopy/setting-tags.html#primitive-axis-or-primitive-axes
Explanation: The solved x is the primitive_axis that can be used in the phonopy calculations
Now, let's add another example: the boron crystal
End of explanation
import urllib2 # needed for functions,classed for opening urls.
url = raw_input( "enter the url needed for downloading file(pdf,mp3,zip...etc)\n");
usock = urllib2.urlopen(url) #function for opening desired url
file_name = url.split('/')[-1] #Example : for given url "www.cs.berkeley.edu/~vazirani/algorithms/chap6.pdf" file_name will store "chap6.pdf"
f = open(file_name, 'wb') #opening file for write and that too in binary mode.
file_size = int(usock.info().getheaders("Content-Length")[0]) #getting size in bytes of file(pdf,mp3...)
print "Downloading: %s Bytes: %s" % (file_name, file_size)
downloaded = 0
block_size = 8192 #bytes to be downloaded in each loop till file pointer does not return eof
while True:
buff = usock.read(block_size)
if not buff: #file pointer reached the eof
break
downloaded = downloaded + len(buff)
f.write(buff)
download_status = r"%3.2f%%" % (downloaded * 100.00 / file_size) #Simple mathematics
download_status = download_status + (len(download_status)+1) * chr(8)
print download_status,"done"
f.close()
Explanation: an example of spider
End of explanation
#we can do some interactive programming in the verdi shell
# the verdi shell is an IPython-based interface to AiiDA
# Abel taught me about this analysis on Jan 19, 2018
n = load_node(50442)
n.get_outputs_dict()
n.out.kappa
n.out.kappa.get_arraynames()
n.out.kappa.get_array('kappa')
array = n.out.kappa.get_array('kappa')
n.out.kappa.get_arraynames()
Explanation: aiida phono3py data analysis
verdi calculation list to get the pk number corresponding to the raw data.
Run verdi shell
End of explanation |
11,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section 6.5.1
Hantush wells introduction type curves
IHE, module transient groundwater
Olsthoorn, 2019-01-03
Hantush (1956) considered the transient flow due to a well with a constant extraction since $t=0$ placed in a uniform confined aquifer of infinite extent covered by a layer with constant vertical resistance $c$ above which a constant head is maintained.
The partial differential equation now contains the leakage between the aquifer and the layer with maintained head.
$$ \frac {\partial^2 s} {\partial r^2} + \frac 1 r \frac {\partial s} {\partial r}
- \frac s {kD c} = \frac S {kD} \frac {\partial s} {\partial t} $$
$$ s(r, t) = 0, \,\,\, \mathtt{for}\,\,\, r \to \infty $$
$$ Q = kD (2 \pi r) \frac {\partial s} {\partial r}, \,\,\, \mathtt{for}\,\,\, r=r_0 $$
The solution may be obtained by straightforward Laplace transformation and looking up the result in the table of Laplace inversions. It reads, with
$$ \lambda = \sqrt{ kD c} $$
$$ s(r, t) = \frac Q {4 \pi kD} W_h(u, \frac r \lambda),\,\,\,\, u = \frac {r^2 S} {4 kD t}$$
where $W_h(..)$ is the so-called Hantush well function, which obviously differs from the Theis well function
Theis
Step1: Implementing the integral
Step2: In conclusion, both implementations work equally well.
Hantush Type curves | Python Code:
from scipy.special import exp1
from scipy.integrate import quad
import numpy as np
import matplotlib.pyplot as plt
Explanation: Section 6.5.1
Hantush wells introduction type curves
IHE, module transient groundwater
Olsthoorn, 2019-01-03
Hantush (1956) considered the transient flow due to a well with a constant extraction since $t=0$ placed in a uniform confined aquifer of infinite extent covered by a layer with constant vertical resistance $c$ above which a constant head is maintained.
The partial differential equation now contains the leakage between the aquifer and the layer with maintained head.
$$ \frac {\partial^2 s} {\partial r^2} + \frac 1 r \frac {\partial s} {\partial r}
- \frac s {kD c} = \frac S {kD} \frac {\partial s} {\partial t} $$
$$ s(r, t) = 0, \,\,\, \mathtt{for}\,\,\, r \to \infty $$
$$ Q = kD (2 \pi r) \frac {\partial s} {\partial r}, \,\,\, \mathtt{for}\,\,\, r=r_0 $$
The solution may be obtained by straightforward Laplace transformation and looking up the result in the table of Laplace inversions. It reads, with
$$ \lambda = \sqrt{ kD c} $$
$$ s(r, t) = \frac Q {4 \pi kD} W_h(u, \frac r \lambda),\,\,\,\, u = \frac {r^2 S} {4 kD t}$$
where $W_h(..)$ is the so-called Hantush well function, which obviously differs from the Theis well function
Theis:
$$ W(z) = \mathtt{exp1}(z) = \intop _z ^\infty \frac {e^{-y}} {y} dy $$
Hantush:
$$ W_h(u, \rho) = \intop_u ^\infty \frac {e^{-y - \frac {\rho^2} {4 y}}} y dy $$
with $\rho = \frac r \lambda $ and $u = \frac {r^2 S} {4 kD t} $
End of explanation
def Wh(u, rho):
'''Numerical integration using quad'''
def kernel(y, rho): return np.exp(-y - (rho / 2)**2 /y) / y
def wh(u, rho): return quad(kernel, u, np.inf, (rho,))
wh = np.frompyfunc(wh, 2, 2)
return wh(u, rho)[0]
def Wh2(u, rho, tol=1e-14):
'''Return Hantush using summation.
This implementation works but has a limited reach; for very small
values of u (u<0.001) the solution will deteriorate into nonsense,
'''
#import pdb
#pdb.set_trace()
tau = (rho/2)**2 / u
f0 = 1
E = exp1(u)
w0= f0 * E
W = w0
for n in range(1, 500):
E = (1/n) * (np.exp(-u) - u * E)
f0 = -f0 / n * tau
w1 = f0 * E
#print(w1)
if np.max(abs(w0 + w1)) < tol: # use w0 + w1 because terms alternate sign
#print('succes')
break
W += w1
w0 = w1 # remember previous value
return W
# Timing and test
u = np.array([1e-3, 1e-2, 1e-1, 1, 10])
rho=0.3
print("Hantush quad :")
%timeit Wh(u, rho)
print(Wh(u, rho))
print("Hantush series:")
%timeit Wh2(u, rho)
print(Wh2(u, rho))
Explanation: Implementing the integral
End of explanation
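As a small extra check, sketched here on top of the Wh and exp1 already defined and imported above: for a very small rho the leakage term is negligible and the Hantush function should approach the Theis well function.
# rho -> 0 limit: Wh(u, rho) should be close to exp1(u)
u_test = np.array([1e-3, 1e-2, 1e-1, 1.0])
print(Wh(u_test, 1e-6))   # Hantush with a negligible r/lambda
print(exp1(u_test))       # Theis, for comparison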
u = np.logspace(-6, 1)
rhos = [0.01, 0.03, 0.1, 0.3, 1]
plt.title('Hantush type curves (integral implementation)')
plt.xlabel('1/u')
plt.ylabel('Wh')
plt.xscale('log')
plt.yscale('log')
plt.ylim((0.1, 100))
plt.grid()
plt.plot(1/u, exp1(u), label='Theis')
for rho in rhos:
plt.plot(1/u, Wh(u, rho), label='rho={:.2f}'.format(rho))
#plt.plot(1/u, Wh2(u, rho), label='rho={:.2f}'.format(rho))
# The series version breaks down for larger times, when the solution becomes stationary. At least with 500 terms.
plt.legend()
plt.show()
Explanation: In conclusion, both implementations work equally well.
Hantush Type curves
End of explanation |
11,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topological Sorting
The function topo_sort implements <em style="color
Step1: Graphical Representation
Step2: The function toDot(Edges, Order) takes two arguments
Step3: Testing | Python Code:
def topo_sort(T, D):
Parents = { t: set() for t in T } # dictionary of parents
Children = { t: set() for t in T } # dictionary of children
for s, t in D:
Children[s].add(t)
Parents [t].add(s)
Orphans = { t for (t, P) in Parents.items() if len(P) == 0 }
Sorted = []
count = 0
Order = {}
while len(T) > 0:
assert Orphans != set(), 'The graph is cyclic!'
t = Orphans.pop()
Order[t] = count
count += 1
Orphans -= { t }
T -= { t }
Sorted.append(t)
for s in Children[t]:
Parents[s] -= { t }
if Parents[s] == set():
Orphans.add(s)
return Sorted
def topo_sort(T, D):
print('_' * 100)
display(toDot(D))
Parents = { t: set() for t in T } # dictionary of parents
Children = { t: set() for t in T } # dictionary of children
for s, t in D:
Children[s].add(t)
Parents [t].add(s)
Orphans = { t for (t, P) in Parents.items() if len(P) == 0 }
Sorted = []
count = 0
Order = {}
while len(T) > 0:
assert Orphans != set(), 'The graph is cyclic!'
t = Orphans.pop()
Order[t] = count
count += 1
Orphans -= { t }
T -= { t }
Sorted.append(t)
for s in Children[t]:
Parents[s] -= { t }
if Parents[s] == set():
Orphans.add(s)
print('_' * 80)
display(toDot(D, Order))
return Sorted
Explanation: Topological Sorting
The function topo_sort implements <em style="color:blue">Kahn's algorithm</em> for
<em style="color:blue">topological sorting</em>.
- The first argument T is the set of the nodes of the directed graph.
- The second argument D is the set of the edges. Edges are represented as pairs.
The function returns a list Sorted that contains all nodes of $T$. If
Sorted[i] = x, Sorted[j] = y, and (x, y) in D,
then we must have i < j.
End of explanation
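To make the ordering property stated above concrete, here is a small helper (a sketch added for this write-up, not part of the original notebook) that checks whether a returned list is a valid topological order:
def is_topological_order(Sorted, D):
    # position of every node in the returned list
    Position = { t: i for i, t in enumerate(Sorted) }
    # every edge (x, y) must go from an earlier position to a later one
    return all(Position[x] < Position[y] for x, y in D)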
import graphviz as gv
Explanation: Graphical Representation
End of explanation
def toDot(Edges, Order={}):
V = set()
for x, y in Edges:
V.add(x)
V.add(y)
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
dot.attr(rankdir='LR', size='8,5')
for x in V:
o = Order.get(x, None)
if o != None:
dot.node(str(x), label='{' + str(x) + '|' + str(o) + '}')
else:
dot.node(str(x))
for u, v in Edges:
dot.edge(str(u), str(v))
return dot
Explanation: The function toDot(Edges, Order) takes two arguments:
- Edges is a set of pairs of the form (x, y) where x and y are nodes of a graph
and (x, y) is a directed edge from xto y.
- Order is a dictionary assigning natural numbers to some of the nodes.
The set of edges is displayed as a directed graph and for those nodes x such that Order[x] is defined, both x and the label Order[x] is depicted.
End of explanation
def demo():
T = { 5, 7, 3, 11, 8, 2, 9, 10 }
D = { (5, 11), (7, 11), (7, 8), (3, 8), (3, 10), (11, 2), (11, 9), (11, 10), (8, 9) }
S = topo_sort(T, D)
print(S)
demo()
Explanation: Testing
End of explanation |
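A quick negative test (a sketch that assumes graphviz is installed, as in the cells above): a cyclic graph has no topological order, so topo_sort should stop on its 'graph is cyclic' assertion.
def cyclic_demo():
    # 1 -> 2 -> 3 -> 1 forms a cycle, so no node is ever an orphan.
    T = { 1, 2, 3 }
    D = { (1, 2), (2, 3), (3, 1) }
    try:
        topo_sort(T, D)
    except AssertionError as e:
        print('caught expected failure:', e)
cyclic_demo()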
11,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To draw graphs we import matplotlib. %matplotlib inline embeds the figures inside the Jupyter notebook instead of opening a new window.
Step1: Import TensorFlow under the name tf.
Create a session object using tf.Session().
sess = tf.Session()
We now create some random sample data: 1000 samples drawn from a normal distribution with mean 0 and standard deviation 0.55.
x_raw = tf.random_normal([...], mean=.., stddev=..)
x = sess.run(x_raw)
Step2: Having created the x-axis values above, we now create the corresponding y-axis values. The y values follow 0.1*x+0.3, but to make them look like real data we mix in a little noise, here normally distributed with mean 0 and standard deviation 0.03.
y_raw = 0.1 * x + 0.3 + tf.random_normal([...], mean=.., stddev=..)
y = sess.run(y_raw)
Step3: Let's show the sample data as a scatter plot. We pass the x and y values to the plot command, draw the points as circles 'o', and give the markers black edges.
plt.plot(x, y, 'o', markeredgecolor='k')
We create the two variables W and b used for the linear regression and build the equation of the line.
W = tf.Variable(tf.zeros([.]))
b = ...(tf.zeros([.]))
y_hat = W * x + b
Step4: The loss function for regression is the mean squared error. We create a node for the loss with TensorFlow's tf.losses.mean_squared_error(), passing it the targets y and the predictions y_hat.
loss = tf.losses.mean_squared_error(y, y_hat)
Gradient descent is implemented in TensorFlow's tf.train.GradientDescentOptimizer(). We create the optimizer with a learning rate of 0.5.
optimizer = tf.train.GradientDescentOptimizer(0.5)
Passing the loss object to optimizer.minimize() produces the final training operation.
train = optimizer.minimize(loss)
Step5: Initialize the variables needed by the computation graph.
Step6: We can run the required operations with the sess.run() method. The operation that must run is train; for display we also evaluate W, b and loss and receive their values back.
_, w_, b_, c = sess.run([train, W, b, loss])
The returned c is appended to the costs list so that we can plot the loss curve later. Using w_ and b_ we draw how the fitted line matches the scatter plot above.
plt.plot(x, w_ * x + b_) | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: To draw graphs we import matplotlib. %matplotlib inline embeds the figures inside the Jupyter notebook instead of opening a new window.
End of explanation
x_raw = ...
x = ...
Explanation: Import TensorFlow under the name tf.
Create a session object using tf.Session().
sess = tf.Session()
We now create some random sample data: 1000 samples drawn from a normal distribution with mean 0 and standard deviation 0.55.
x_raw = tf.random_normal([...], mean=.., stddev=..)
x = sess.run(x_raw)
End of explanation
y_raw = ...
y = ...
Explanation: Having created the x-axis values above, we now create the corresponding y-axis values. The y values follow 0.1*x+0.3, but to make them look like real data we mix in a little noise, here normally distributed with mean 0 and standard deviation 0.03.
y_raw = 0.1 * x + 0.3 + tf.random_normal([...], mean=.., stddev=..)
y = sess.run(y_raw)
End of explanation
W = ...
b = ...
y_hat = ...
Explanation: Let's show the sample data as a scatter plot. We pass the x and y values to the plot command, draw the points as circles 'o', and give the markers black edges.
plt.plot(x, y, 'o', markeredgecolor='k')
We create the two variables W and b used for the linear regression and build the equation of the line.
W = tf.Variable(tf.zeros([.]))
b = ...(tf.zeros([.]))
y_hat = W * x + b
End of explanation
loss = ...
optimizer = ...
train = ...
Explanation: The loss function for regression is the mean squared error. We create a node for the loss with TensorFlow's tf.losses.mean_squared_error(), passing it the targets y and the predictions y_hat.
loss = tf.losses.mean_squared_error(y, y_hat)
Gradient descent is implemented in TensorFlow's tf.train.GradientDescentOptimizer(). We create the optimizer with a learning rate of 0.5.
optimizer = tf.train.GradientDescentOptimizer(0.5)
Passing the loss object to optimizer.minimize() produces the final training operation.
train = optimizer.minimize(loss)
End of explanation
init = ...
sess.run(init)
Explanation: Initialize the variables needed by the computation graph.
End of explanation
costs = []
for step in range(10):
_, w_, b_, c = ...
costs.append(c)
print(step, w_, b_, c)
# Draw the scatter plot
plt.plot(x, y, 'o', markeredgecolor='k')
# Draw the fitted line
plt.plot(...)
# Label the x and y axes and set the minimum and maximum range of each axis.
plt.xlabel('x')
plt.xlim(-2,2)
plt.ylim(0.1,0.6)
plt.ylabel('y')
plt.show()
Explanation: We can run the required operations with the sess.run() method. The operation that must run is train; for display we also evaluate W, b and loss and receive their values back.
_, w_, b_, c = sess.run([train, W, b, loss])
The returned c is appended to the costs list so that we can plot the loss curve later. Using w_ and b_ we draw how the fitted line matches the scatter plot above.
plt.plot(x, w_ * x + b_)
End of explanation |
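For reference, here is one way the blanks above could be filled in; this is a sketch that assumes the TensorFlow 1.x API used throughout this exercise.
import tensorflow as tf
sess = tf.Session()
# 1000 x values with mean 0 and standard deviation 0.55
x = sess.run(tf.random_normal([1000], mean=0.0, stddev=0.55))
# y values on the line 0.1*x + 0.3 plus a little noise
y = sess.run(0.1 * x + 0.3 + tf.random_normal([1000], mean=0.0, stddev=0.03))
W = tf.Variable(tf.zeros([1]))
b = tf.Variable(tf.zeros([1]))
y_hat = W * x + b
loss = tf.losses.mean_squared_error(y, y_hat)
train = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
sess.run(tf.global_variables_initializer())
for step in range(10):
    _, w_, b_, c = sess.run([train, W, b, loss])
    print(step, w_, b_, c)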
11,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Source time courses
This tutorial focuses on visualization of stcs.
Surface Source Estimates
First, we get the paths for the evoked data and the time courses (stcs).
Step1: Then, we read the stc from file
Step2: This is a SourceEstimate object.
Step3: The SourceEstimate object is in fact a surface source estimate. MNE also
supports volume-based source estimates but more on that later.
We can plot the source estimate using the stc.plot method, just as with other MNE objects.
Step4: Note that here we used initial_time=0.1, but we can also browse through
time using time_viewer=True.
In case mayavi is not available, we also offer a matplotlib
backend. Here we use verbose='error' to ignore a warning that not all
vertices were used in plotting.
Step5: Volume Source Estimates
We can also visualize volume source estimates (used for deep structures).
Let us load the sensor-level evoked data. We select the MEG channels
to keep things simple.
Step6: Then, we can load the precomputed inverse operator from a file.
Step7: The source estimate is computed using the inverse operator and the
sensor-space data.
Step8: This time, we have a different container (a VolSourceEstimate) for the source time course.
Step9: This too comes with a convenient plot method.
Step10: For this visualization, nilearn must be installed.
This visualization is interactive. Click on any of the anatomical slices
to explore the time series. Clicking on any time point will bring up the
corresponding anatomical map.
We could visualize the source estimate on a glass brain. Unlike the previous
visualization, a glass brain does not show us one slice but what we would
see if the brain was transparent like glass.
Step11: Vector Source Estimates
If we choose to use pick_ori='vector' in apply_inverse, we obtain a vector source estimate.
Step12: Dipole fits
For computing a dipole fit, we need to load the noise covariance, the BEM
solution, and the coregistration transformation files. Note that for the
other methods, these were already used to generate the inverse operator.
Step13: Dipoles are fit independently for each time point, so let us crop our time
series to visualize the dipole fit for the time point of interest.
Step14: Finally, we can visualize the dipole. | Python Code:
import os
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne import read_evokeds
data_path = sample.data_path()
sample_dir = os.path.join(data_path, 'MEG', 'sample')
subjects_dir = os.path.join(data_path, 'subjects')
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
Explanation: Visualize Source time courses
This tutorial focuses on visualization of stcs.
Surface Source Estimates
First, we get the paths for the evoked data and the time courses (stcs).
End of explanation
stc = mne.read_source_estimate(fname_stc, subject='sample')
Explanation: Then, we read the stc from file
End of explanation
print(stc)
Explanation: This is a :class:SourceEstimate <mne.SourceEstimate> object
End of explanation
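Before plotting, it can help to peek at the object's contents; the lines below are a small sketch using standard SourceEstimate attributes (data and times).
print(stc.data.shape)                    # (n_source_vertices, n_times)
print(stc.times.min(), stc.times.max())  # time range in seconds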
initial_time = 0.1
stc.plot(subjects_dir=subjects_dir, initial_time=initial_time)
Explanation: The SourceEstimate object is in fact a surface source estimate. MNE also
supports volume-based source estimates but more on that later.
We can plot the source estimate using the
:func:stc.plot <mne.SourceEstimate.plot> just as in other MNE
objects. Note that for this visualization to work, you must have mayavi
and pysurfer installed on your machine.
End of explanation
stc.plot(subjects_dir=subjects_dir, initial_time=initial_time,
backend='matplotlib', verbose='error')
Explanation: Note that here we used initial_time=0.1, but we can also browse through
time using time_viewer=True.
In case mayavi is not available, we also offer a matplotlib
backend. Here we use verbose='error' to ignore a warning that not all
vertices were used in plotting.
End of explanation
evoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)
Explanation: Volume Source Estimates
We can also visualize volume source estimates (used for deep structures).
Let us load the sensor-level evoked data. We select the MEG channels
to keep things simple.
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-vol-7-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
src = inv['src']
Explanation: Then, we can load the precomputed inverse operator from a file.
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
stc = apply_inverse(evoked, inv, lambda2, method)
stc.crop(0.0, 0.2)
Explanation: The source estimate is computed using the inverse operator and the
sensor-space data.
End of explanation
print(stc)
Explanation: This time, we have a different container
(:class:VolSourceEstimate <mne.VolSourceEstimate>) for the source time
course.
End of explanation
stc.plot(src, subject='sample', subjects_dir=subjects_dir)
Explanation: This too comes with a convenient plot method.
End of explanation
stc.plot(src, subject='sample', subjects_dir=subjects_dir, mode='glass_brain')
Explanation: For this visualization, nilearn must be installed.
This visualization is interactive. Click on any of the anatomical slices
to explore the time series. Clicking on any time point will bring up the
corresponding anatomical map.
We could visualize the source estimate on a glass brain. Unlike the previous
visualization, a glass brain does not show us one slice but what we would
see if the brain was transparent like glass.
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector')
stc.plot(subject='sample', subjects_dir=subjects_dir,
initial_time=initial_time)
Explanation: Vector Source Estimates
If we choose to use pick_ori='vector' in
:func:apply_inverse <mne.minimum_norm.apply_inverse>, we obtain a vector source estimate.
End of explanation
fname_cov = os.path.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = os.path.join(subjects_dir, 'sample', 'bem',
'sample-5120-bem-sol.fif')
fname_trans = os.path.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
Explanation: Dipole fits
For computing a dipole fit, we need to load the noise covariance, the BEM
solution, and the coregistration transformation files. Note that for the
other methods, these were already used to generate the inverse operator.
End of explanation
evoked.crop(0.1, 0.1)
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
Explanation: Dipoles are fit independently for each time point, so let us crop our time
series to visualize the dipole fit for the time point of interest.
End of explanation
dip.plot_locations(fname_trans, 'sample', subjects_dir)
Explanation: Finally, we can visualize the dipole.
End of explanation |
11,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting a Mixture Model with Gibbs Sampling
Step1: Suppose we receive some data that looks like the following
Step2: It appears that these data exist in three separate clusters. We want to develop a method for finding these latent clusters. One way to start developing a method is to attempt to describe the process that may have generated these data.
For simplicity and sanity, let's assume that each data point is generated independently of the other. Moreover, we will assume that within each cluster, the data points are identically distributed. In this case, we will assume each cluster is normally distributed and that each cluster has the same variance, $\sigma^2$.
Given these assumptions, our data could have been generated by the following process. For each data point, randomly select 1 of 3 clusters from the distribution $\text{Discrete}(\pi_1, \pi_2, \pi_3)$. Each cluster $k$ corresponds to a parameter $\theta_k$ for that cluster, sample a data point from $\mathcal{N}(\theta_k, \sigma^2)$.
Equivalently, we could consider these data to be generated from a probability distribution with this probability density function
Step7: Gibbs Sampling
The theory of Gibbs sampling tells us that given some data $\bf y$ and a probability distribution $p$ parameterized by $\gamma_1, \ldots, \gamma_d$, we can successively draw samples from the distribution by sampling from
$$\gamma_j^{(t)}\sim p(\gamma_j \,|\, \gamma_{\neg j}^{(t-1)})$$
where $\gamma_{\neg j}^{(t-1)}$ is all current values of $\gamma_i$ except for $\gamma_j$. If we sample long enough, these $\gamma_j$ values will be random samples from $p$.
In deriving a Gibbs sampler, it is often helpful to observe that
$$
p(\gamma_j \,|\, \gamma_{\neg j})
= \frac{
p(\gamma_1,\ldots,\gamma_d)
}{
p(\gamma_{\neg j})
} \propto p(\gamma_1,\ldots,\gamma_d).
$$
The conditional distribution is proportional to the joint distribution. We will get a lot of mileage from this simple observation by dropping constant terms from the joint distribution (relative to the parameters we are conditioned on).
The $\gamma$ values in our model are each of the $\theta_k$ values, the $z_i$ values, and the $\pi_k$ values. Thus, we need to derive the conditional distributions for each of these.
Many derivations of Gibbs samplers that I have seen rely on a lot of handwaving and casual appeals to conjugacy. I have tried to add more mathematical details here. I would gladly accept feedback on how to more clearly present the derivations! I have also tried to make the derivations more concrete by immediately providing code to do the computations in this specific case.
Conditional Distribution of Assignment
For brevity, we will use
$$
p(z_i=k \,|\, \cdot)=
p(z_i=k \,|\,
z_{\neg i}, \pi,
\theta_1, \theta_2, \theta_3, \sigma, \bf x
).
$$
Because cluster assignments are conditionally independent given the cluster weights and parameters,
\begin{align}
p(z_i=k \,|\, \cdot)
&\propto
\prod_i^n
\prod_k^K
\left(
\pi_k
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right}
\right)^{\delta(z_i, k)} \
&\propto
\pi_k \cdot
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right}
\end{align}
This equation intuitively makes sense: point $i$ is more likely to be in cluster $k$ if $k$ is itself probable ($\pi_k\gg 0$) and $x_i$ is close to the mean of the cluster $\theta_k$.
Step10: Conditional Distribution of Mixture Weights
We can similarly derive the conditional distributions of mixture weights by an application of Bayes theorem. Instead of updating each component of $\pi$ separately, we update them together (this is called blocked Gibbs).
\begin{align}
p(\pi \,|\, \cdot)&=
p(\pi \,|\,
\bf{z},
\theta_1, \theta_2, \theta_3,
\sigma, \mathbf{x}, \alpha
)\
&\propto
p(\pi \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \alpha
)
p(\bf{z}\ \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \pi, \alpha
)\
&=
p(\pi \,|\,
\alpha
)
p(\bf{z}\ \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \pi, \alpha
)\
&=
\prod_{k=1}^K \pi_k^{\alpha/K - 1}
\prod_{k=1}^K \pi_k^{\sum_{i=1}^N \delta(z_i, k)} \
&=\prod_{k=1}^3 \pi_k^{\alpha/K+\sum_{i=1}^N \delta(z_i, k)-1}\
&\propto \text{Dir}\left(
\sum_{i=1}^N \delta(z_i, 1)+\alpha/K,
\sum_{i=1}^N \delta(z_i, 2)+\alpha/K,
\sum_{i=1}^N \delta(z_i, 3)+\alpha/K
\right)
\end{align}
Here are Python functions to sample from the mixture weights given the current state and to update the mixture weights in the state object.
Step11: Conditional Distribution of Cluster Means
Finally, we need to compute the conditional distribution for the cluster means.
We assume the unknown cluster means are distributed according to a normal distribution with hyperparameter mean $\lambda_1$ and variance $\lambda_2^2$. The final step in this derivation comes from the normal-normal conjugacy. For more information see section 2.3 of this and section 6.2 this.)
\begin{align}
p(\theta_k \,|\, \cdot)&=
p(\theta_k \,|\,
\bf{z}, \pi,
\theta_{\neg k},
\sigma, \bf x, \lambda_1, \lambda_2
) \
&\propto p(\left{x_i \,|\, z_i=k\right} \,|\, \bf{z}, \pi,
\theta_1, \theta_2, \theta_3,
\sigma, \lambda_1, \lambda_2) \cdot\
&\phantom{==}p(\theta_k \,|\, \bf{z}, \pi,
\theta_{\neg k},
\sigma, \lambda_1, \lambda_2)\
&\propto p(\left{x_i \,|\, z_i=k\right} \,|\, \mathbf{z},
\theta_k, \sigma)
p(\theta_k \,|\, \lambda_1, \lambda_2)\
&= \mathcal{N}(\theta_k \,|\, \mu_n, \sigma_n)\
\end{align}
$$ \sigma_n^2 = \frac{1}{
\frac{1}{\lambda_2^2} + \frac{N_k}{\sigma^2}
} $$
and
$$\mu_n = \sigma_n^2
\left(
\frac{\lambda_1}{\lambda_2^2} +
\frac{n\bar{x_k}}{\sigma^2}
\right)
$$
Here is the code for sampling those means and for updating our state accordingly.
Step12: Doing each of these three updates in sequence makes a complete Gibbs step for our mixture model. Here is a function to do that
Step13: Initially, we assigned each data point to a random cluster. We can see this by plotting a histogram of each cluster.
Step14: Each time we run gibbs_step, our state is updated with newly sampled assignments. Look what happens to our histogram after 5 steps
Step16: Suddenly, we are seeing clusters that appear very similar to what we would intuitively expect: three Gaussian clusters.
Step17: See that the log likelihood improves with iterations of the Gibbs sampler. This is what we should expect, since the Gibbs sampler finds state configurations that make the data we have seem likely. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
from scipy import stats
from collections import namedtuple, Counter
Explanation: Fitting a Mixture Model with Gibbs Sampling
End of explanation
data = pd.Series.from_csv("clusters.csv")
_=data.hist(bins=20)
data.size
Explanation: Suppose we receive some data that looks like the following:
End of explanation
SuffStat = namedtuple('SuffStat', 'theta N')
def update_suffstats(state):
for cluster_id, N in Counter(state['assignment']).iteritems():
points_in_cluster = [x
for x, cid in zip(state['data_'], state['assignment'])
if cid == cluster_id
]
mean = np.array(points_in_cluster).mean()
state['suffstats'][cluster_id] = SuffStat(mean, N)
def initial_state():
num_clusters = 3
alpha = 1.0
cluster_ids = range(num_clusters)
state = {
'cluster_ids_': cluster_ids,
'data_': data,
'num_clusters_': num_clusters,
'cluster_variance_': .01,
'alpha_': alpha,
'hyperparameters_': {
"mean": 0,
"variance": 1,
},
'suffstats': [None, None, None],
'assignment': [random.choice(cluster_ids) for _ in data],
'pi': [alpha / num_clusters for _ in cluster_ids],
'cluster_means': [-1, 0, 1]
}
update_suffstats(state)
return state
state = initial_state()
for k, v in state.items():
print k
Explanation: It appears that these data exist in three separate clusters. We want to develop a method for finding these latent clusters. One way to start developing a method is to attempt to describe the process that may have generated these data.
For simplicity and sanity, let's assume that each data point is generated independently of the other. Moreover, we will assume that within each cluster, the data points are identically distributed. In this case, we will assume each cluster is normally distributed and that each cluster has the same variance, $\sigma^2$.
Given these assumptions, our data could have been generated by the following process. For each data point, randomly select 1 of 3 clusters from the distribution $\text{Discrete}(\pi_1, \pi_2, \pi_3)$. Each cluster $k$ corresponds to a parameter $\theta_k$ for that cluster, sample a data point from $\mathcal{N}(\theta_k, \sigma^2)$.
Equivalently, we could consider these data to be generated from a probability distribution with this probability density function:
$$
p(x_i \,|\, \pi, \theta_1, \theta_2, \theta_3, \sigma)=
\sum_{k=1}^3 \pi_k\cdot
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right}
$$
where $\pi$ is a 3-dimensional vector giving the mixing proportions. In other words, $\pi_k$ describes the proportion of points that occur in cluster $k$.
That is, the probability distribution describing $x$ is a linear combination of normal distributions.
We want to use this generative model to formulate an algorithm for determining the particular parameters that generated the dataset above. The $\pi$ vector is unknown to us, as is each cluster mean $\theta_k$.
We would also like to know $z_i\in{1, 2, 3}$, the latent cluster for each point. It turns out that introducing $z_i$ into our model will help us solve for the other values.
The joint distribution of our observed data (data) along with the assignment variables is given by:
\begin{align}
p(\mathbf{x}, \mathbf{z} \,|\, \pi, \theta_1, \theta_2, \theta_3, \sigma)&=
p(\mathbf{z} \,|\, \pi)
p(\mathbf{x} \,|\, \mathbf{z}, \theta_1, \theta_2, \theta_3, \sigma)\
&= \prod_{i=1}^N p(z_i \,|\, \pi)
\prod_{i=1}^N p(x_i \,|\, z_i, \theta_1, \theta_2, \theta_3, \sigma) \
&= \prod_{i=1}^N \pi_{z_i}
\prod_{i=1}^N
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left{
\frac{-(x_i-\theta_{z_i})^2}{2\sigma^2}
\right}\
&= \prod_{i=1}^N
\left(
\pi_{z_i}
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left{
\frac{-(x_i-\theta_{z_i})^2}{2\sigma^2}
\right}
\right)\
&=
\prod_i^n
\prod_k^K
\left(
\pi_k
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right}
\right)^{\delta(z_i, k)}
\end{align}
Keeping Everything Straight
Before moving on, we need to devise a way to keep all our data and parameters straight. Following ideas suggested by Keith Bonawitz, let's define a "state" object to store all of this data.
It won't yet be clear why we are defining some components of state, however we will use each part eventually! As an attempt at clarity, I am using a trailing underscore in the names of members that are fixed. We will update the other parameters as we try to fit the model.
End of explanation
def log_assignment_score(data_id, cluster_id, state):
log p(z_i=k \,|\, \cdot)
We compute these scores in log space for numerical stability.
x = state['data_'][data_id]
theta = state['cluster_means'][cluster_id]
var = state['cluster_variance_']
log_pi = np.log(state['pi'][cluster_id])
return log_pi + stats.norm.logpdf(x, theta, var)
def assigment_probs(data_id, state):
p(z_i=cid \,|\, \cdot) for cid in cluster_ids
scores = [log_assignment_score(data_id, cid, state) for cid in state['cluster_ids_']]
scores = np.exp(np.array(scores))
return scores / scores.sum()
def sample_assignment(data_id, state):
Sample cluster assignment for data_id given current state
cf Step 1 of Algorithm 2.1 in Sudderth 2006
p = assigment_probs(data_id, state)
return np.random.choice(state['cluster_ids_'], p=p)
def update_assignment(state):
Update cluster assignment for each data point given current state
cf Step 1 of Algorithm 2.1 in Sudderth 2006
for data_id, x in enumerate(state['data_']):
state['assignment'][data_id] = sample_assignment(data_id, state)
update_suffstats(state)
Explanation: Gibbs Sampling
The theory of Gibbs sampling tells us that given some data $\bf y$ and a probability distribution $p$ parameterized by $\gamma_1, \ldots, \gamma_d$, we can successively draw samples from the distribution by sampling from
$$\gamma_j^{(t)}\sim p(\gamma_j \,|\, \gamma_{\neg j}^{(t-1)})$$
where $\gamma_{\neg j}^{(t-1)}$ is all current values of $\gamma_i$ except for $\gamma_j$. If we sample long enough, these $\gamma_j$ values will be random samples from $p$.
In deriving a Gibbs sampler, it is often helpful to observe that
$$
p(\gamma_j \,|\, \gamma_{\neg j})
= \frac{
p(\gamma_1,\ldots,\gamma_d)
}{
p(\gamma_{\neg j})
} \propto p(\gamma_1,\ldots,\gamma_d).
$$
The conditional distribution is proportional to the joint distribution. We will get a lot of mileage from this simple observation by dropping constant terms from the joint distribution (relative to the parameters we are conditioned on).
The $\gamma$ values in our model are each of the $\theta_k$ values, the $z_i$ values, and the $\pi_k$ values. Thus, we need to derive the conditional distributions for each of these.
Many derivations of Gibbs samplers that I have seen rely on a lot of handwaving and casual appeals to conjugacy. I have tried to add more mathematical details here. I would gladly accept feedback on how to more clearly present the derivations! I have also tried to make the derivations more concrete by immediately providing code to do the computations in this specific case.
Conditional Distribution of Assignment
For brevity, we will use
$$
p(z_i=k \,|\, \cdot)=
p(z_i=k \,|\,
z_{\neg i}, \pi,
\theta_1, \theta_2, \theta_3, \sigma, \bf x
).
$$
Because cluster assignments are conditionally independent given the cluster weights and parameters,
\begin{align}
p(z_i=k \,|\, \cdot)
&\propto
\prod_i^n
\prod_k^K
\left(
\pi_k
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right}
\right)^{\delta(z_i, k)} \
&\propto
\pi_k \cdot
\frac{1}{\sigma\sqrt{2\pi}}
\text{exp}\left{
\frac{-(x_i-\theta_k)^2}{2\sigma^2}
\right}
\end{align}
This equation intuitively makes sense: point $i$ is more likely to be in cluster $k$ if $k$ is itself probable ($\pi_k\gg 0$) and $x_i$ is close to the mean of the cluster $\theta_k$.
For each data point $i$, we can compute $p(z_i=k \,|\, \cdot)$ for each cluster $k$. These values are the unnormalized parameters to a discrete distribution from which we can sample assignments.
Below, we define functions for doing this sampling. sample_assignment will generate a sample from the posterior assignment distribution for the specified data point. update_assignment will sample from the posterior assignment for each data point and update the state object.
End of explanation
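As a small numerical illustration of the assignment rule (using a throwaway copy of the state and a hypothetical data point at 0.05): with equal mixture weights and the initial means (-1, 0, 1), a point near 0 should be assigned to the middle cluster almost surely.
illus_state = initial_state()
illus_state['pi'] = np.array([1/3, 1/3, 1/3])
illus_state['data_'] = pd.Series([0.05])
print(assigment_probs(0, illus_state))   # nearly all mass on the cluster with mean 0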
def sample_mixture_weights(state):
Sample new mixture weights from current state according to
a Dirichlet distribution
cf Step 2 of Algorithm 2.1 in Sudderth 2006
ss = state['suffstats']
alpha = [ss[cid].N + state['alpha_'] / state['num_clusters_']
for cid in state['cluster_ids_']]
return stats.dirichlet(alpha).rvs(size=1).flatten()
def update_mixture_weights(state):
Update state with new mixture weights from current state
sampled according to a Dirichlet distribution
cf Step 2 of Algorithm 2.1 in Sudderth 2006
state['pi'] = sample_mixture_weights(state)
Explanation: Conditional Distribution of Mixture Weights
We can similarly derive the conditional distributions of mixture weights by an application of Bayes theorem. Instead of updating each component of $\pi$ separately, we update them together (this is called blocked Gibbs).
\begin{align}
p(\pi \,|\, \cdot)&=
p(\pi \,|\,
\bf{z},
\theta_1, \theta_2, \theta_3,
\sigma, \mathbf{x}, \alpha
)\
&\propto
p(\pi \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \alpha
)
p(\bf{z}\ \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \pi, \alpha
)\
&=
p(\pi \,|\,
\alpha
)
p(\bf{z}\ \,|\,
\mathbf{x},
\theta_1, \theta_2, \theta_3,
\sigma, \pi, \alpha
)\
&=
\prod_{k=1}^K \pi_k^{\alpha/K - 1}
\prod_{k=1}^K \pi_k^{\sum_{i=1}^N \delta(z_i, k)} \
&=\prod_{k=1}^3 \pi_k^{\alpha/K+\sum_{i=1}^N \delta(z_i, k)-1}\
&\propto \text{Dir}\left(
\sum_{i=1}^N \delta(z_i, 1)+\alpha/K,
\sum_{i=1}^N \delta(z_i, 2)+\alpha/K,
\sum_{i=1}^N \delta(z_i, 3)+\alpha/K
\right)
\end{align}
Here are Python functions to sample from the mixture weights given the current state and to update the mixture weights in the state object.
End of explanation
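A quick sanity check on the current state: the sampled mixture weights come from a Dirichlet distribution, so they are non-negative and sum to one.
w = sample_mixture_weights(state)
print(w, w.sum())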
def sample_cluster_mean(cluster_id, state):
cluster_var = state['cluster_variance_']
hp_mean = state['hyperparameters_']['mean']
hp_var = state['hyperparameters_']['variance']
ss = state['suffstats'][cluster_id]
numerator = hp_mean / hp_var + ss.theta * ss.N / cluster_var
denominator = (1.0 / hp_var + ss.N / cluster_var)
posterior_mu = numerator / denominator
posterior_var = 1.0 / denominator
return stats.norm(posterior_mu, np.sqrt(posterior_var)).rvs()
def update_cluster_means(state):
state['cluster_means'] = [sample_cluster_mean(cid, state)
for cid in state['cluster_ids_']]
Explanation: Conditional Distribution of Cluster Means
Finally, we need to compute the conditional distribution for the cluster means.
We assume the unknown cluster means are distributed according to a normal distribution with hyperparameter mean $\lambda_1$ and variance $\lambda_2^2$. The final step in this derivation comes from the normal-normal conjugacy. For more information see section 2.3 of this and section 6.2 of this.
\begin{align}
p(\theta_k \,|\, \cdot)&=
p(\theta_k \,|\,
\bf{z}, \pi,
\theta_{\neg k},
\sigma, \bf x, \lambda_1, \lambda_2
) \
&\propto p(\left{x_i \,|\, z_i=k\right} \,|\, \bf{z}, \pi,
\theta_1, \theta_2, \theta_3,
\sigma, \lambda_1, \lambda_2) \cdot\
&\phantom{==}p(\theta_k \,|\, \bf{z}, \pi,
\theta_{\neg k},
\sigma, \lambda_1, \lambda_2)\
&\propto p(\left{x_i \,|\, z_i=k\right} \,|\, \mathbf{z},
\theta_k, \sigma)
p(\theta_k \,|\, \lambda_1, \lambda_2)\
&= \mathcal{N}(\theta_k \,|\, \mu_n, \sigma_n)\
\end{align}
$$ \sigma_n^2 = \frac{1}{
\frac{1}{\lambda_2^2} + \frac{N_k}{\sigma^2}
} $$
and
$$\mu_n = \sigma_n^2
\left(
\frac{\lambda_1}{\lambda_2^2} +
\frac{n\bar{x_k}}{\sigma^2}
\right)
$$
Here is the code for sampling those means and for updating our state accordingly.
End of explanation
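To see the normal-normal update at work, the sketch below plugs hypothetical sufficient statistics into sample_cluster_mean: as the cluster count N grows, the sampled means concentrate around the cluster's empirical mean (0.5 here) and their spread shrinks.
demo_state = initial_state()
for N in [1, 10, 1000]:
    demo_state['suffstats'][0] = SuffStat(theta=0.5, N=N)
    draws = [sample_cluster_mean(0, demo_state) for _ in range(500)]
    print(N, np.mean(draws), np.std(draws))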
def gibbs_step(state):
update_assignment(state)
update_mixture_weights(state)
update_cluster_means(state)
Explanation: Doing each of these three updates in sequence makes a complete Gibbs step for our mixture model. Here is a function to do that:
End of explanation
def plot_clusters(state):
gby = pd.DataFrame({
'data': state['data_'],
'assignment': state['assignment']}
).groupby(by='assignment')['data']
hist_data = [gby.get_group(cid).tolist()
for cid in gby.groups.keys()]
plt.hist(hist_data,
bins=20,
histtype='stepfilled', alpha=.5 )
plot_clusters(state)
Explanation: Initially, we assigned each data point to a random cluster. We can see this by plotting a histogram of each cluster.
End of explanation
for _ in range(5):
gibbs_step(state)
plot_clusters(state)
Explanation: Each time we run gibbs_step, our state is updated with newly sampled assignments. Look what happens to our histogram after 5 steps:
End of explanation
def log_likelihood(state):
Data log-likelihood
Equation 2.153 in Sudderth
ll = 0
for x in state['data_']:
pi = state['pi']
mean = state['cluster_means']
sd = np.sqrt(state['cluster_variance_'])
ll += np.log(np.dot(pi, stats.norm(mean, sd).pdf(x)))
return ll
state = initial_state()
ll = [log_likelihood(state)]
for _ in range(20):
gibbs_step(state)
ll.append(log_likelihood(state))
pd.Series(ll).plot()
Explanation: Suddenly, we are seeing clusters that appear very similar to what we would intuitively expect: three Gaussian clusters.
Another way to see the progress made by the Gibbs sampler is to plot the change in the model's log-likelihood after each step. The log likehlihood is given by:
$$
\log p(\mathbf{x} \,|\, \pi, \theta_1, \theta_2, \theta_3)
\propto \sum_x \log \left(
\sum_{k=1}^3 \pi_k \exp
\left{
-(x-\theta_k)^2 / (2\sigma^2)
\right}
\right)
$$
We can define this as a function of our state object:
End of explanation
pd.Series(ll).plot(ylim=[-150, -100])
Explanation: See that the log likelihood improves with iterations of the Gibbs sampler. This is what we should expect: the Gibbs sampler finds state configurations that make the data we have seem "likely". However, the likelihood isn't strictly monotonic: it jitters up and down. Though it behaves similarly, the Gibbs sampler isn't optimizing the likelihood function. In its steady state, it is sampling from the posterior distribution. The state after each step of the Gibbs sampler is a sample from the posterior.
End of explanation |
11,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Epoching and averaging (ERP/ERF)
Step1: In MNE, epochs refers to a collection of single trials or short segments
of time locked raw data. If you haven't already, you might want to check out
tut_epochs_objects. In this tutorial we take a deeper look into
construction of epochs and averaging the epoch data to evoked instances.
First let's read in the raw sample data.
Step2: To create time locked epochs, we first need a set of events that contain the
information about the times. In this tutorial we use the stimulus channel to
define the events. Let's look at the raw data.
Step3: Notice channel STI 014 at the bottom. It is the trigger channel that
was used for combining all the events to a single channel. We can see that it
has several pulses of different amplitude throughout the recording. These
pulses correspond to different stimuli presented to the subject during the
acquisition. The pulses have values of 1, 2, 3, 4, 5 and 32. These are the
events we are going to align the epochs to. To create an event list from raw
data, we simply call a function dedicated just for that. Since the event list
is simply a numpy array, you can also manually create one. If you create one
from an outside source (like a separate file of events), pay special
attention in aligning the events correctly with the raw data.
Step4: The event list contains three columns. The first column corresponds to
sample number. To convert this to seconds, you should divide the sample
number by the used sampling frequency. The second column is reserved for the
old value of the trigger channel at the time of transition, but is currently
not in use. The third column is the trigger id (amplitude of the pulse).
You might wonder why the samples don't seem to align with the plotted data.
For instance, the first event has a sample number of 27977 which should
translate to roughly 46.6 seconds (27977 / 600). However looking at
the pulses we see the first pulse at 3.6 seconds. This is because Neuromag
recordings have an attribute first_samp which refers to the offset
between the system start and the start of the recording. Our data has a
first_samp equal to 25800. This means that the first sample you see with
raw.plot is the sample number 25800. Generally you don't need to worry
about this offset as it is taken into account with MNE functions, but it is
good to be aware of. Just to confirm, let's plot the events together with the
raw data. Notice how the vertical lines (events) align nicely with the pulses
on STI 014.
Step5: In this tutorial we are only interested in triggers 1, 2, 3 and 4. These
triggers correspond to auditory and visual stimuli. The event_id here
can be an int, a list of ints or a dict. With dicts it is possible to assign
these ids to distinct categories. When using ints or lists this information
is lost. First we shall define some parameters to feed to the mne.Epochs constructor.
Step6: Now we have everything we need to construct the epochs. To get some
meaningful results, we also want to baseline the epochs. Baselining computes
the mean over the baseline period and adjusts the data accordingly. The
epochs constructor uses a baseline period from tmin to 0.0 seconds by
default, but it is wise to be explicit. That way you are less likely to end
up with surprises along the way. None as the first element of the tuple
refers to the start of the time window (-200 ms in this case).
See mne.Epochs for more.
Step7: Let's plot the epochs to see the results. The number at the top refers to the
id number. We can see that 128 good epochs out of total of 145 events got
through the rejection process. Visual inspection also reveals that some
epochs containing saccades or blinks got through. You can also reject epochs
by hand by clicking on the epoch in the browser window. The selected epochs
get rejected when you close the epochs browser. How you should reject the
epochs and which thresholds to use is not a trivial question and this
tutorial takes no stand on that matter.
To see all the interactive features of the epochs browser, click 'Help' in
the lower left corner of the browser window.
Step8: To see why the epochs were rejected, we can plot the drop log.
Step9: To get the evoked response you can simply do epochs.average(). It
includes only the data channels by default. For the sake of example, we use
picks to include the EOG channels as well. Notice that we cannot use the
same picks as before as the indices are different. 'Why are they different?'
you might ask. They're different because picks is simply a list of
channel indices and as the epochs were constructed, also a new info structure
is created where the channel indices run from 0 to epochs.info['nchan'].
See tut_info_objects for more information.
Step10: Notice we have used forward slashes ('/') to separate the factors of the
conditions of the experiment. We can use these 'tags' to select for example
all left trials (both visual left and auditory left) ...
Step11: <div class="alert alert-info"><h4>Note</h4><p>It is also possible to add metadata to Epochs objects, allowing for
more complex selections on subsets of Epochs. See
`sphx_glr_auto_tutorials_plot_metadata_epochs.py` for more
information.</p></div>
Finally, let's plot the evoked responses. | Python Code:
import os.path as op
import numpy as np
import mne
Explanation: Epoching and averaging (ERP/ERF)
End of explanation
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname)
raw.set_eeg_reference('average', projection=True) # set EEG average reference
Explanation: In MNE, epochs refers to a collection of single trials or short segments
of time locked raw data. If you haven't already, you might want to check out
tut_epochs_objects. In this tutorial we take a deeper look into
construction of epochs and averaging the epoch data to evoked instances.
First let's read in the raw sample data.
End of explanation
order = np.arange(raw.info['nchan'])
order[9] = 312 # We exchange the plotting order of two channels
order[312] = 9 # to show the trigger channel as the 10th channel.
raw.plot(n_channels=10, order=order, block=True)
Explanation: To create time locked epochs, we first need a set of events that contain the
information about the times. In this tutorial we use the stimulus channel to
define the events. Let's look at the raw data.
End of explanation
events = mne.find_events(raw)
print('Found %s events, first five:' % len(events))
print(events[:5])
# Plot the events to get an idea of the paradigm
# Specify colors and an event_id dictionary for the legend.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4,
'smiley': 5, 'button': 32}
color = {1: 'green', 2: 'yellow', 3: 'red', 4: 'c', 5: 'black', 32: 'blue'}
mne.viz.plot_events(events, raw.info['sfreq'], raw.first_samp, color=color,
event_id=event_id)
Explanation: Notice channel STI 014 at the bottom. It is the trigger channel that
was used for combining all the events to a single channel. We can see that it
has several pulses of different amplitude throughout the recording. These
pulses correspond to different stimuli presented to the subject during the
acquisition. The pulses have values of 1, 2, 3, 4, 5 and 32. These are the
events we are going to align the epochs to. To create an event list from raw
data, we simply call a function dedicated just for that. Since the event list
is simply a numpy array, you can also manually create one. If you create one
from an outside source (like a separate file of events), pay special
attention in aligning the events correctly with the raw data.
End of explanation
raw.plot(events=events, n_channels=10, order=order)
Explanation: The event list contains three columns. The first column corresponds to
sample number. To convert this to seconds, you should divide the sample
number by the used sampling frequency. The second column is reserved for the
old value of the trigger channel at the time of transition, but is currently
not in use. The third column is the trigger id (amplitude of the pulse).
You might wonder why the samples don't seem to align with the plotted data.
For instance, the first event has a sample number of 27977 which should
translate to roughly 46.6 seconds (27977 / 600). However looking at
the pulses we see the first pulse at 3.6 seconds. This is because Neuromag
recordings have an attribute first_samp which refers to the offset
between the system start and the start of the recording. Our data has a
first_samp equal to 25800. This means that the first sample you see with
raw.plot is the sample number 25800. Generally you don't need to worry
about this offset as it is taken into account with MNE functions, but it is
good to be aware of. Just to confirm, let's plot the events together with the
raw data. Notice how the vertical lines (events) align nicely with the pulses
on STI 014.
End of explanation
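To make the offset concrete, the snippet below converts the first few event samples to seconds relative to the start of the recording by removing first_samp and dividing by the sampling frequency.
print(raw.first_samp)
print((events[:5, 0] - raw.first_samp) / raw.info['sfreq'])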
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
# Only pick MEG and EOG channels.
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True)
Explanation: In this tutorial we are only interested in triggers 1, 2, 3 and 4. These
triggers correspond to auditory and visual stimuli. The event_id here
can be an int, a list of ints or a dict. With dicts it is possible to assign
these ids to distinct categories. When using ints or lists this information
is lost. First we shall define some parameters to feed to the
:class:mne.Epochs constructor. The values tmin and tmax refer to
offsets in relation to the events. Here we make epochs that collect the data
from 200 ms before to 500 ms after the event.
End of explanation
baseline = (None, 0.0)
reject = {'mag': 4e-12, 'eog': 200e-6}
epochs = mne.Epochs(raw, events=events, event_id=event_id, tmin=tmin,
tmax=tmax, baseline=baseline, reject=reject, picks=picks)
Explanation: Now we have everything we need to construct the epochs. To get some
meaningful results, we also want to baseline the epochs. Baselining computes
the mean over the baseline period and adjusts the data accordingly. The
epochs constructor uses a baseline period from tmin to 0.0 seconds by
default, but it is wise to be explicit. That way you are less likely to end
up with surprises along the way. None as the first element of the tuple
refers to the start of the time window (-200 ms in this case).
See :class:mne.Epochs for more.
We also define rejection thresholds to get rid of noisy epochs. The
rejection thresholds are defined as peak-to-peak values within the epoch time
window. They are defined as T/m for gradiometers, T for magnetometers and V
for EEG and EOG electrodes.
<div class="alert alert-info"><h4>Note</h4><p>In this tutorial, we don't preprocess the data. This is not
something you would normally do. See our `documentation` on
preprocessing for more.</p></div>
End of explanation
epochs.plot(block=True)
Explanation: Let's plot the epochs to see the results. The number at the top refers to the
id number. We can see that 128 good epochs out of total of 145 events got
through the rejection process. Visual inspection also reveals that some
epochs containing saccades or blinks got through. You can also reject epochs
by hand by clicking on the epoch in the browser window. The selected epochs
get rejected when you close the epochs browser. How you should reject the
epochs and which thresholds to use is not a trivial question and this
tutorial takes no stand on that matter.
To see all the interactive features of the epochs browser, click 'Help' in
the lower left corner of the browser window.
End of explanation
epochs.plot_drop_log()
Explanation: To see why the epochs were rejected, we can plot the drop log.
End of explanation
picks = mne.pick_types(epochs.info, meg=True, eog=True)
evoked_left = epochs['Auditory/Left'].average(picks=picks)
evoked_right = epochs['Auditory/Right'].average(picks=picks)
Explanation: To get the evoked response you can simply do epochs.average(). It
includes only the data channels by default. For the sake of example, we use
picks to include the EOG channels as well. Notice that we cannot use the
same picks as before as the indices are different. 'Why are they different?'
you might ask. They're different because picks is simply a list of
channel indices and as the epochs were constructed, also a new info structure
is created where the channel indices run from 0 to epochs.info['nchan'].
See tut_info_objects for more information.
End of explanation
epochs_left = epochs['Left']
# ... or to select a very specific subset. This is the same as above:
evoked_left = epochs['Left/Auditory'].average(picks=picks)
Explanation: Notice we have used forward slashes ('/') to separate the factors of the
conditions of the experiment. We can use these 'tags' to select for example
all left trials (both visual left and auditory left) ...
End of explanation
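A quick way to check the tag-based selection is to count the epochs matched by each condition (the counts may still include epochs that the rejection criteria will drop later).
for cond in ['Auditory/Left', 'Auditory/Right', 'Visual/Left', 'Visual/Right']:
    print(cond, len(epochs[cond]))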
evoked_left.plot(time_unit='s')
evoked_right.plot(time_unit='s')
Explanation: <div class="alert alert-info"><h4>Note</h4><p>It is also possible to add metadata to Epochs objects, allowing for
more complex selections on subsets of Epochs. See
`sphx_glr_auto_tutorials_plot_metadata_epochs.py` for more
information.</p></div>
Finally, let's plot the evoked responses.
End of explanation |
11,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Inspection Paradox is Everywhere
Allen Downey 2019
MIT License
Step1: Class size
Here's the data summarizing the distribution of undergraduate class sizes at Purdue University in 2013-14.
Step3: I generate a sample from this distribution, assuming a uniform distribution in each range and an upper bound of 300.
Step4: The "unbiased" sample is as seen by the college, with each class equally likely to be in the sample.
Step6: To generate a biased sample, we use the values themselves as weights and resample with replacement.
Step8: To plot the distribution, I use KDE to estimate the density function, then evaluate it over the given sequence of xs.
Step9: The following plot shows the distribution of class size as seen by the Dean, and as seen by a sample of students.
Step11: Here are the means of the unbiased and biased distributions.
Step12: Red Line
Here are times between trains in seconds.
Step13: Here's the same data in minutes.
Step14: We can use the same function to generate a biased sample.
Step15: And plot the results.
Step16: Here are the means of the distributions and the percentage difference.
Step18: Social network
The following function reads the Facebook data.
Step19: The unbiased sample is the number of friends for each user.
Step20: We can use the same function to generate a biased sample.
Step21: And generate the plot.
Step22: Here are the means of the distributions.
Step23: And the probability that the friend of a user has more friends than the user.
Step24: Relay race
The following function reads the data from the 2010 James Joyce Ramble 10K, where I ran my personal record time.
Step25: In this case, the weights are related to the difference between each element of the sample and the hypothetical speed of the observer.
Step26: And here's the plot.
Step27: Prison sentences
First we read the data from the Bureau of Prisons web page.
Step28: Here are the low and I sentences for each range. I assume that the minimum sentence is about a week, that sentences "less than life" are 40 years, and that a life sentence is between 40 and 60 years.
Step29: We can get the counts from the table.
Step31: Here's a different version of generate_sample for a continuous quantity.
Step32: In this case, the data are biased.
Step33: So we have to unbias them with weights inversely proportional to the values.
Prisoners in federal prison typically serve 85% of their nominal sentence. We can take that into account in the weights.
Step34: Here's the unbiased sample.
Step35: And the plotted distributions.
Step36: We can also compute the distribution of sentences as seen by someone at the prison for 13 months.
Step37: Here's the sample.
Step38: And here's what it looks like.
Step39: In the unbiased distribution, almost half of prisoners serve less than one year.
Step40: But if we sample the prison population, barely 3% are short timers.
Step41: Here are the means of the distributions.
Step42: The dartboard problem | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from empiricaldist import Pmf
from utils import decorate
# set the random seed so we get the same results every time
np.random.seed(17)
# make the directory for the figures
import os
if not os.path.exists('inspection'):
!mkdir inspection
Explanation: The Inspection Paradox is Everywhere
Allen Downey 2019
MIT License
End of explanation
# Class size data originally from
# https://www.purdue.edu/datadigest/2013-14/InstrStuLIfe/DistUGClasses.html
# now available from
# https://web.archive.org/web/20160415011613/https://www.purdue.edu/datadigest/2013-14/InstrStuLIfe/DistUGClasses.html
sizes = [(1, 1),
(2, 9),
(10, 19),
(20, 29),
(30, 39),
(40, 49),
(50, 99),
(100, 300)]
counts = [138, 635, 1788, 1979, 796, 354, 487, 333]
Explanation: Class size
Here's the data summarizing the distribution of undergraduate class sizes at Purdue University in 2013-14.
End of explanation
def generate_sample(sizes, counts):
Generate a sample from a distribution.
sizes: sequence of (low, high) pairs
counts: sequence of integers
returns: NumPy array
t = []
for (low, high), count in zip(sizes, counts):
print(count, low, high)
sample = np.random.randint(low, high+1, count)
t.extend(sample)
return np.array(t)
Explanation: I generate a sample from this distribution, assuming a uniform distribution in each range and an upper bound of 300.
End of explanation
unbiased = generate_sample(sizes, counts)
Explanation: The "unbiased" sample is as seen by the college, with each class equally likely to be in the sample.
End of explanation
def resample_weighted(sample, weights):
Resample values from `sample` with the given weights.
sample: NumPy array
weights: NumPy array
returns: NumPy array
n = len(sample)
p = weights / np.sum(weights)
return np.random.choice(sample, n, p=p)
biased = resample_weighted(unbiased, unbiased)
Explanation: To generate a biased sample, we use the values themselves as weights and resample with replacement.
End of explanation
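Here is a tiny illustration of the size-biased resampling with made-up values: with values 10 and 90 in equal numbers, weighting by the value itself should pick 90 about 90% of the time.
toy = np.array([10, 90] * 500)
toy_biased = resample_weighted(toy, toy.astype(float))
print(np.mean(toy == 90), np.mean(toy_biased == 90))   # ~0.5 vs ~0.9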
from scipy.stats import gaussian_kde
def kdeplot(sample, xs, label=None, **options):
Use KDE to plot the density function.
sample: NumPy array
xs: NumPy array
label: string
density = gaussian_kde(sample, **options).evaluate(xs)
plt.plot(xs, density, label=label)
decorate(ylabel='Relative likelihood')
Explanation: To plot the distribution, I use KDE to estimate the density function, then evaluate it over the given sequence of xs.
End of explanation
xs = np.arange(1, 300)
kdeplot(unbiased, xs, 'Reported by the Dean')
kdeplot(biased, xs, 'Reported by students')
decorate(xlabel='Class size',
title='Distribution of class sizes')
plt.savefig('inspection/class_size.png', dpi=150)
Explanation: The following plot shows the distribution of class size as seen by the Dean, and as seen by a sample of students.
End of explanation
np.mean(unbiased)
np.mean(biased)
from empiricaldist import Cdf
def cdfplot(sample, xs, label=None, **options):
Plot the CDF of the sample.
sample: NumPy array
xs: NumPy array (ignored)
label: string
cdf = Cdf.from_seq(sample, **options)
cdf.plot(label=label)
decorate(ylabel='CDF')
xs = np.arange(1, 300)
cdfplot(unbiased, xs, 'Reported by the Dean')
cdfplot(biased, xs, 'Reported by students')
decorate(xlabel='Class size',
title='Distribution of class sizes')
plt.savefig('inspection/class_size.png', dpi=150)
Explanation: Here are the means of the unbiased and biased distributions.
End of explanation
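There is also a closed form worth checking: under size-biased sampling the biased mean equals E[X**2] / E[X] computed from the unbiased sample, which the resampled mean should approximate.
print(np.mean(biased), np.mean(unbiased**2) / np.mean(unbiased))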
unbiased = [
428.0, 705.0, 407.0, 465.0, 433.0, 425.0, 204.0, 506.0, 143.0, 351.0,
450.0, 598.0, 464.0, 749.0, 341.0, 586.0, 754.0, 256.0, 378.0, 435.0,
176.0, 405.0, 360.0, 519.0, 648.0, 374.0, 483.0, 537.0, 578.0, 534.0,
577.0, 619.0, 538.0, 331.0, 186.0, 629.0, 193.0, 360.0, 660.0, 484.0,
512.0, 315.0, 457.0, 404.0, 740.0, 388.0, 357.0, 485.0, 567.0, 160.0,
428.0, 387.0, 901.0, 187.0, 622.0, 616.0, 585.0, 474.0, 442.0, 499.0,
437.0, 620.0, 351.0, 286.0, 373.0, 232.0, 393.0, 745.0, 636.0, 758.0,
]
Explanation: Red Line
Here are times between trains in seconds.
End of explanation
unbiased = np.array(unbiased) / 60
Explanation: Here's the same data in minutes.
End of explanation
biased = resample_weighted(unbiased, unbiased)
Explanation: We can use the same function to generate a biased sample.
End of explanation
xs = np.linspace(1, 16.5, 101)
kdeplot(unbiased, xs, 'Seen by MBTA')
kdeplot(biased, xs, 'Seen by passengers')
decorate(xlabel='Time between trains (min)',
title='Distribution of time between trains')
plt.savefig('inspection/red_line.png', dpi=150)
xs = np.linspace(1, 16.5, 101)
cdfplot(unbiased, xs, 'Seen by MBTA')
cdfplot(biased, xs, 'Seen by passengers')
decorate(xlabel='Time between trains (min)',
title='Distribution of time between trains')
plt.savefig('inspection/red_line.png', dpi=150)
Explanation: And plot the results.
End of explanation
np.mean(biased), np.mean(unbiased)
(np.mean(biased) - np.mean(unbiased)) / np.mean(unbiased) * 100
Explanation: Here are the means of the distributions and the percentage difference.
End of explanation
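One more consequence: a passenger who arrives at a random moment waits, on average, half of the biased mean gap, because arrivals land uniformly within whichever gap they fall into.
print(np.mean(biased) / 2)   # average waiting time in minutes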
import networkx as nx
def read_graph(filename):
Read a graph from a file.
filename: string
return: nx.Graph
G = nx.Graph()
array = np.loadtxt(filename, dtype=int)
G.add_edges_from(array)
return G
# https://snap.stanford.edu/data/facebook_combined.txt.gz
fb = read_graph('facebook_combined.txt.gz')
n = len(fb)
m = len(fb.edges())
n, m
Explanation: Social network
The following function reads the Facebook data.
End of explanation
unbiased = [fb.degree(node) for node in fb]
len(unbiased)
np.max(unbiased)
Explanation: The unbiased sample is the number of friends for each user.
End of explanation
biased = resample_weighted(unbiased, unbiased)
Explanation: We can use the same function to generate a biased sample.
End of explanation
xs = np.linspace(0, 300, 101)
kdeplot(unbiased, xs, 'Random sample of people')
kdeplot(biased, xs, 'Random sample of friends')
decorate(xlabel='Number of friends in social network',
title='Distribution of social network size')
plt.savefig('inspection/social.png', dpi=150)
xs = np.linspace(0, 300, 101)
cdfplot(unbiased, xs, 'Random sample of people')
cdfplot(biased, xs, 'Random sample of friends')
decorate(xlabel='Number of friends in social network',
title='Distribution of social network size',
xlim=[-10, 310])
plt.savefig('inspection/social.png', dpi=150)
Explanation: And generate the plot.
End of explanation
np.mean(biased), np.mean(unbiased)
Explanation: Here are the means of the distributions.
End of explanation
np.mean(biased > unbiased)
Explanation: And the probability that the friend of a user has more friends than the user.
End of explanation
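The same paradox can also be checked directly on the graph, without resampling: pick a random user, then a random friend of that user, and compare their degrees (a sketch using the fb graph loaded above).
nodes = list(fb)
trials = 1000
hits = 0
for node in np.random.choice(nodes, trials):
    friend = np.random.choice(list(fb.neighbors(node)))
    hits += fb.degree(friend) > fb.degree(node)
print(hits / trials)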
import relay
results = relay.ReadResults()
unbiased = relay.GetSpeeds(results)
Explanation: Relay race
The following function reads the data from the 2010 James Joyce Ramble 10K, where I ran my personal record time.
End of explanation
weights = np.abs(np.array(unbiased) - 7)
biased = resample_weighted(unbiased, weights)
Explanation: In this case, the weights are related to the difference between each element of the sample and the hypothetical speed of the observer.
End of explanation
xs = np.linspace(3, 11, 101)
kdeplot(unbiased, xs, 'Seen by spectator')
kdeplot(biased, xs, 'Seen by runner at 7 mph', bw_method=0.2)
decorate(xlabel='Running speed (mph)',
title='Distribution of running speed')
plt.savefig('inspection/relay.png', dpi=150)
xs = np.linspace(3, 11, 101)
cdfplot(unbiased, xs, 'Seen by spectator')
cdfplot(biased, xs, 'Seen by runner at 7 mph')
decorate(xlabel='Running speed (mph)',
title='Distribution of running speed')
plt.savefig('inspection/relay.png', dpi=150)
Explanation: And here's the plot.
End of explanation
tables = pd.read_html('BOP Statistics_ Sentences Imposed.html')
df = tables[0]
df
Explanation: Prison sentences
First we read the data from the Bureau of Prisons web page.
End of explanation
sentences = [(0.02, 1),
(1, 3),
(3, 5),
(5, 10),
(10, 15),
(15, 20),
(20, 40),
(40, 60)]
Explanation: Here are the low and high sentences for each range. I assume that the minimum sentence is about a week, that sentences "less than life" are 40 years, and that a life sentence is between 40 and 60 years.
End of explanation
counts = df['# of Inmates']
Explanation: We can get the counts from the table.
End of explanation
def generate_sample(sizes, counts):
Generate a sample from a distribution.
sizes: sequence of (low, high) pairs
counts: sequence of integers
returns: NumPy array
t = []
for (low, high), count in zip(sizes, counts):
print(count, low, high)
sample = np.random.uniform(low, high, count)
t.extend(sample)
return np.array(t)
Explanation: Here's a different version of generate_sample for a continuous quantity.
End of explanation
biased = generate_sample(sentences, counts)
Explanation: In this case, the data are biased.
End of explanation
weights = 1 / (0.85 * np.array(biased))
Explanation: So we have to unbias them with weights inversely proportional to the values.
Prisoners in federal prison typically serve 85% of their nominal sentence. We can take that into account in the weights.
End of explanation
unbiased = resample_weighted(biased, weights)
Explanation: Here's the unbiased sample.
End of explanation
xs = np.linspace(0, 60, 101)
kdeplot(unbiased, xs, 'Seen by judge', bw_method=0.5)
kdeplot(biased, xs, 'Seen by prison visitor', bw_method=0.5)
decorate(xlabel='Prison sentence (years)',
title='Distribution of federal prison sentences')
plt.savefig('inspection/orange.png', dpi=150)
xs = np.linspace(0, 60, 101)
cdfplot(unbiased, xs, 'Seen by judge')
cdfplot(biased, xs, 'Seen by prison visitor')
decorate(xlabel='Prison sentence (years)',
title='Distribution of federal prison sentences')
plt.savefig('inspection/orange.png', dpi=150)
Explanation: And the plotted distributions.
End of explanation
x = 0.85 * unbiased
y = 13 / 12
weights = x + y
Explanation: We can also compute the distribution of sentences as seen by someone at the prison for 13 months.
End of explanation
kerman = resample_weighted(unbiased, weights)
Explanation: Here's the sample.
End of explanation
xs = np.linspace(0, 60, 101)
kdeplot(unbiased, xs, 'Seen by judge', bw_method=0.5)
kdeplot(kerman, xs, 'Seen by Kerman', bw_method=0.5)
kdeplot(biased, xs, 'Seen by visitor', bw_method=0.5)
decorate(xlabel='Prison sentence (years)',
title='Distribution of federal prison sentences')
plt.savefig('inspection/orange.png', dpi=150)
xs = np.linspace(0, 60, 101)
cdfplot(unbiased, xs, 'Seen by judge')
cdfplot(kerman, xs, 'Seen by Kerman')
cdfplot(biased, xs, 'Seen by visitor')
decorate(xlabel='Prison sentence (years)',
title='Distribution of federal prison sentences')
plt.savefig('inspection/orange.png', dpi=150)
Explanation: And here's what it looks like.
End of explanation
np.mean(unbiased<1)
Explanation: In the unbiased distribution, almost half of prisoners serve less than one year.
End of explanation
np.mean(biased<1)
Explanation: But if we sample the prison population, barely 3% are short timers.
End of explanation
np.mean(unbiased)
np.mean(biased)
np.mean(kerman)
Explanation: Here are the means of the distributions.
End of explanation
from matplotlib.patches import Circle
def draw_dartboard():
ax = plt.gca()
c1 = Circle((0, 0), 170, color='C3', alpha=0.3)
c2 = Circle((0, 0), 160, color='white')
c3 = Circle((0, 0), 107, color='C3', alpha=0.3)
c4 = Circle((0, 0), 97, color='white')
c5 = Circle((0, 0), 16, color='C3', alpha=0.3)
c6 = Circle((0, 0), 6, color='white')
for circle in [c1, c2, c3, c4, c5, c6]:
ax.add_patch(circle)
plt.axis('equal')
draw_dartboard()
plt.text(0, 10, '25 ring')
plt.text(0, 110, 'triple ring')
plt.text(0, 170, 'double ring')
plt.savefig('inspection/darts0.png', dpi=150)
sigma = 50
n = 100
error_x = np.random.normal(0, sigma, size=(n))
error_y = np.random.normal(0, sigma, size=(n))
draw_dartboard()
plt.plot(error_x, error_y, '.')
plt.savefig('inspection/darts1.png', dpi=150)
sigma = 50
n = 10000
error_x = np.random.normal(0, sigma, size=(n))
error_y = np.random.normal(0, sigma, size=(n))
import numpy as np
import seaborn as sns
import matplotlib.pyplot as pl
ax = sns.kdeplot(error_x, error_y, shade=True, cmap="PuBu")
ax.collections[0].set_alpha(0)
plt.axis([-240, 240, -175, 175])
decorate(xlabel='x distance from center (mm)',
ylabel='y distance from center (mm)',
title='Estimated density')
plt.savefig('inspection/darts2.png', dpi=150)
rs = np.hypot(error_x, error_y)
np.random.seed(18)
sigma = 50
n = 10000
error_x = np.random.normal(0, sigma, size=(n))
error_y = np.random.normal(0, sigma, size=(n))
xs = np.linspace(-200, 200, 101)
#ys = np.exp(-(xs/sigma)**2/2)
#pmf = Pmf(ys, index=xs)
#pmf.normalize()
#pmf.plot(color='gray')
unbiased = error_x
biased = resample_weighted(unbiased, np.abs(unbiased))
kdeplot(unbiased, xs, 'Density at a point')
kdeplot(biased, xs, 'Total density in a ring')
#kdeplot(rs, xs, 'Total density in a ring')
decorate(xlabel='Distance from center (mm)',
ylabel='Density',
xlim=[0, 210])
plt.savefig('inspection/darts3.png', dpi=150)
xs = np.linspace(0, 200, 101)
unbiased = np.abs(error_x)
biased = resample_weighted(unbiased, unbiased)
cdfplot(unbiased, xs, 'Density at a point')
cdfplot(biased, xs, 'Total density in a ring')
decorate(xlabel='Distance from center (mm)',
ylabel='Density')
plt.savefig('inspection/darts4.png', dpi=150)
triple = (biased > 97) & (biased < 107)
triple.mean() * 100
ring50 = (biased > 6) & (biased < 16)
ring50.mean() * 100
double = (biased > 160) & (biased < 170)
double.mean() * 100
bull = (biased < 6)
bull.mean() * 100
Explanation: The dartboard problem
End of explanation |
11,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: Molecules
Let's start with something relaxing.
Execute each cell by selecting it and pressing Shift + Return
Step2: You've just created your first molecule in buckyball - the bb.from_name method can translate many common chemical names, as well as IUPAC nomenclature, into 2D and 3D chemical structures.
If morphine is a little intimidating, you can try making caffeine, or even just water, by editing the code in the above cell. You can edit any code in this notebook at any time, and press Shift + Return to execute the new commands.
Let's take a look at your Molecule's attributes. Try using Jupyter's live autocompletion - typing my_first_molecule.[tab], for instance, will show you a list of the object's attributes.
Step3: Every chemical system in buckyball is called a Molecule. These objects store information about atoms, chemical bonds, 3D positions, and chemical properties. A Molecule object is more than just a data store, though - they also have methods for simulating, modeling, and visualizing this information.
A list of the molecule's atoms are stored at my_first_molecule.atoms. Let's print out some custom information about the first five atoms
Step4: Manipulating 3D structures
Sometimes, a 3D structure needs to be fixed - perhaps the wrong rotamer was generated, or someone gave you a non-planar geometry of a planar molecule.
For instance, the OpenBabel SMILES converter will generate a gauche-conformation alkane, when we really want an anti conformation. Let's use the GeometryBuilder to fix it - click on the central dihedral bond, and adjust the angle to 180º, either with the slider or by typing "180".
Step5: Readin', Writin', and Introspectin'
Of course, you'll probably want to import and export your own molecular structures. Buckyball can read and write molecules from a variety of 3D formats, including pdb, sdf, mol2, cif, and xyz.
Let's start by reading in a file that was downloaded from the PDB database (you can also use bb.from_pdb(3AID) to download it directly from the PDB site). We can get some quick information about the molecule in the file just by its name on the last line of the cell.
Step6: This is a crystal structure of a protein (HIV-1 protease) that's bound to a small organic drug molecule (that's the single unknown residue type). In addition to the protein and drug, there are also several water molecules hanging around the structure.
Step7: Let's pull the drug molecule out of the structure and save it to disk. First, click on one of the atoms in the drug. You'll see that it's Residue ARQ401 in Chain A. We can grab this residue by going into the molecule's secondary structure
Step8: Now, we'll write it to disk
Step9: Simulation and modeling
As a quick first example, let's take a look at a quantum mechanical model of benzene.
This time, we'll construct the molecule in yet another way - by using a SMILES string.
Step10: Next, we associate the quantum mechanical model with the molecule, and run a calculation
Step11: There's a bunch of data here, which we'll dive into in other notebooks. For now, we can visualize the results of the calculation
Step12: You can visualize two different kinds of orbitals here
Step13: The cell below constructs a quick visualization of the HIV drug's contacts with the surrounding protein residues, with 2D rendering, a 3D rendering, and an inspector that provides information about atoms that the user clicks on | Python Code:
# This cell sets up both the python and notebook environments
%matplotlib inline
import moldesign as mdt # import the buckyball package
from moldesign import units as u # import the buckyball unit system
Explanation: <a href="http://moldesign.bionano.autodesk.com" target="_blank"><img src="img/Top.png"></a>
<center><h1>MDT Quickstart<br> or,
A Hitchhiker's Guide to the Chemical Modeling Universe</h1></center>
This notebook gives you the Molecular Design Toolkit crash course.
Click on the following cell and press Shift + Return to execute it.
End of explanation
my_first_molecule = mdt.from_name('morphine')
my_first_molecule.draw()
Explanation: Molecules
Let's start with something relaxing.
Execute each cell by selecting it and pressing Shift + Return
End of explanation
my_first_molecule
Explanation: You've just created your first molecule in buckyball - the bb.from_name method can translate many common chemical names, as well as IUPAC nomenclature, into 2D and 3D chemical structures.
If morphine is a little intimidating, you can try making caffeine, or even just water, by editing the code in the above cell. You can edit any code in this notebook at any time, and press Shift + Return to execute the new commands.
Let's take a look at your Molecule's attributes. Try using Jupyter's live autocompletion - typing my_first_molecule.[tab], for instance, will show you a list of the object's attributes.
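For instance, any other name that from_name can resolve works the same way; a quick, purely illustrative variation on the cell above:
```python
caffeine = mdt.from_name('caffeine')  # same call as before, different molecule
caffeine.draw()
```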
End of explanation
print 'This molecule has %d atoms' % my_first_molecule.num_atoms
for atom in my_first_molecule.atoms[:5]:
print 'Atom {atom} (atomic number {atom.atnum}) weighs {atom.mass}, and ' \
'is bonded to {atom.num_bonds} other atoms.'.format(atom=atom)
my_first_molecule.atoms[-1]
Explanation: Every chemical system in buckyball is called a Molecule. These objects store information about atoms, chemical bonds, 3D positions, and chemical properties. A Molecule object is more than just a data store, though - they also have methods for simulating, modeling, and visualizing this information.
A list of the molecule's atoms are stored at my_first_molecule.atoms. Let's print out some custom information about the first five atoms:
End of explanation
mol = mdt.from_smiles('CCCC')
mdt.ui.GeometryBuilder(mol)
Explanation: Manipulating 3D structures
Sometimes, a 3D structure needs to be fixed - perhaps the wrong rotamer was generated, or someone gave you a non-planar geometry of a planar molecule.
For instance, the OpenBabel SMILES converter will generate a gauche-conformation alkane, when we really want an anti conformation. Let's use the GeometryBuilder to fix it - click on the central dihedral bond, and adjust the angle to 180º, either with the slider or by typing "180".
End of explanation
hivprotease = mdt.read('data/3AID.pdb')
hivprotease
Explanation: Readin', Writin', and Introspectin'
Of course, you'll probably want to import and export your own molecular structures. Buckyball can read and write molecules from a variety of 3D formats, including pdb, sdf, mol2, cif, and xyz.
Let's start by reading in a file that was downloaded from the PDB database (you can also use bb.from_pdb(3AID) to download it directly from the PDB site). We can get some quick information about the molecule in the file just by its name on the last line of the cell.
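If you prefer to skip the local file, the direct-download route mentioned above looks roughly like this (a sketch assuming network access, spelled with the mdt alias used in this notebook):
```python
hivprotease = mdt.from_pdb('3AID')  # fetch the structure straight from the PDB
```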
End of explanation
hivprotease.draw()
Explanation: This is a crystal structure of a protein (HIV-1 protease) that's bound to a small organic drug molecule (that's the single unknown residue type). In addition to the protein and drug, there are also several water molecules hanging around the structure.
End of explanation
drugresidue = hivprotease.chains['A'].residues['ARQ401']
drug = mdt.Molecule(drugresidue)
drug.draw3d()
Explanation: Let's pull the drug molecule out of the structure and save it to disk. First, click on one of the atoms in the drug. You'll see that it's Residue ARQ401 in Chain A. We can grab this residue by going into the molecule's secondary structure:
End of explanation
drug.write(filename='/tmp/drugmol.xyz')
!cat /tmp/drugmol.xyz
Explanation: Now, we'll write it to disk:
End of explanation
mol = mdt.from_smiles('C1=CC=CC=C1')
mol.draw()
Explanation: Simulation and modeling
As a quick first example, let's take a look at a quantum mechanical model of benzene.
This time, we'll construct the molecule in yet another way - by using a SMILES string.
End of explanation
mol.set_energy_model(mdt.models.PySCFPotential(theory='rhf',basis='sto-3g'))
mol.calculate()
Explanation: Next, we associate the quantum mechanical model with the molecule, and run a calculation:
End of explanation
mol.draw_orbitals()
Explanation: There's a bunch of data here, which we'll dive into in other notebooks. For now, we can visualize the results of the calculation:
End of explanation
viewer = hivprotease.draw3d()
hdonors = [atom for atom in hivprotease.atoms
if atom.elem in ('O','N') and atom.residue.type == 'protein']
viewer.add_style(style='vdw', atoms=hdonors, radius=0.5)
viewer.highlight_atoms(atoms=[a for a in hdonors if a.distance(drugresidue) <= 4.0*u.angstrom])
viewer
Explanation: You can visualize two different kinds of orbitals here: the canonical molecular orbitals, often just referred to as "molecular orbitals" (MOs); and atomic orbitals, which were combined to create the MOs.
Visualization API
You can access and create molecule visualizations in the notebook as well.
First, let's draw some more information into our visualization. We'll draw all the protein's potential hydrogen bond donors (basically Oxygens and Nitrogens) as spheres, and highlight those close to the ligand.
End of explanation
contact_view = bb.viewer.make_contact_view(drugresidue,view_radius=3.5*u.angstrom,
contact_radius=2.25*u.angstrom,
width=600,height=400)
geom_view = bb.viewer.GeometryViewer(hivprotease, width=600,height=300)
geom_view.cartoon(opacity=0.7)
geom_view.licorice(atoms=drugresidue,radius=0.5)
for residue,color in contact_view.colored_residues.iteritems():
geom_view.licorice(atoms=residue,color=color)
# We use the ipywidgets package to arrange the three UI elements:
import ipywidgets as ipy
viewer_group = ipy.VBox([geom_view,contact_view])
selector = bb.ui.SelectionGroup([ipy.HBox([viewer_group,bb.ui.AtomInspector()])])
selector
Explanation: The cell below constructs a quick visualization of the HIV drug's contacts with the surrounding protein residues, with 2D rendering, a 3D rendering, and an inspector that provides information about atoms that the user clicks on:
End of explanation |
11,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One of the core pieces of implementing a neural network algorithm is back-propagating derivatives of the cost function. Theano and TensorFlow both define symbolic differentiation functions, and automatic differentiation (autograd) likewise plays a central role in PyTorch as a deep learning platform. The difference is that PyTorch's dynamic graph (define by run) makes it more flexible: you can even change the attributes of a PyTorch Variable at every iteration so that it joins or leaves the backward graph. This is especially useful in some applications, for example in the later stage of training, when only the later layers' parameters need updating, you can simply set the earlier layers' Variables to not require gradients.
Variables
autograd.Variable is one of the core ideas of PyTorch's design. It wraps a tensor into a Variable and supports almost all tensor operations, while adding two extremely important attributes: requires_grad and volatile. A Variable also carries three members: .data stores the Variable's values; .grad, itself a Variable, stores the derivative values; and .grad_fn (creator) is the function that produced the Variable, which is "None" when the Variable was created by the user. For details, see the Variable source code.
Step1: A simple numpy implementation of one hidden layer neural network.
In this implementation, for each update of $w_i$, both the forward and backward passes need to be computed.
Step2: with very slight modifications, we could end up with the implementation of the same algorithm in PyTorch | Python Code:
x = Variable(T.ones(2,2), requires_grad=True)
print x
y = T.exp(x + 2)
yy = T.exp(-x-2)
print y
z = (y + yy)/2
out = z.mean()
print z, out
make_dot(out)
out.backward(T.FloatTensor([1]), retain_graph=True)  # seed the backward pass with a gradient of 1
x.grad
T.randn(1,1)
from __future__ import print_function
xx = Variable(torch.randn(1,1), requires_grad = True)
print(xx)
yy = 3*xx
zz = yy**2
#yy.register_hook(print)
zz.backward(T.FloatTensor([0.1]))
print(xx.grad)
Explanation: One of the core pieces of implementing a neural network algorithm is back-propagating derivatives of the cost function. Theano and TensorFlow both define symbolic differentiation functions, and automatic differentiation (autograd) likewise plays a central role in PyTorch as a deep learning platform. The difference is that PyTorch's dynamic graph (define by run) makes it more flexible: you can even change the attributes of a PyTorch Variable at every iteration so that it joins or leaves the backward graph. This is especially useful in some applications, for example in the later stage of training, when only the later layers' parameters need updating, you can simply set the earlier layers' Variables to not require gradients.
Variables
autograd.Variable is one of the core ideas of PyTorch's design. It wraps a tensor into a Variable and supports almost all tensor operations, while adding two extremely important attributes: requires_grad and volatile. A Variable also carries three members: .data stores the Variable's values; .grad, itself a Variable, stores the derivative values; and .grad_fn (creator) is the function that produced the Variable, which is "None" when the Variable was created by the user. For details, see the Variable source code.
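A minimal sketch of those three members, using the same Variable/T aliases as the cells above (written against the classic pre-0.4 Variable API this notebook assumes):
```python
v = Variable(T.ones(3), requires_grad=True)  # user-created leaf: grad_fn is None
w = (v * 2).sum()                            # w.grad_fn records the op that created it
w.backward()
print(v.data)     # the wrapped tensor values
print(v.grad)     # dw/dv = 2 for every element
print(w.grad_fn)  # the creator function object
```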
End of explanation
# y_pred = w2*(relu(w1*x))
# loss = 0.5*sum (y_pred - y)^2
import numpy as np
N, D_in, D_hidden, D_out = 50, 40, 100, 10
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)
w1 = np.random.randn(D_in, D_hidden)
w2 = np.random.randn(D_hidden, D_out)
learning_rate = 0.0001
for t in range(100):
### forward pass
h = x.dot(w1) #50x40 and 40x100 produce 50x100
h_relu = np.maximum(h, 0) #this has to be np.maximum as it takes two input arrays and do element-wise max, 50x100
y_pred = h_relu.dot(w2) #50x100 and 100x10 produce 50x10
#print y_pred.shape
### loss function
loss = 0.5 * np.sum(np.square(y_pred - y))
### backward pass
grad_y_pred = y_pred - y #50x10
grad_w2 = h_relu.T.dot(grad_y_pred) #50x100 and 50x10 should produce 100x10, so transpose h_relu
grad_h_relu = grad_y_pred.dot(w2.T) #50x10 and 100x10 should produce 50x100, so transpose w2
grad_h = grad_h_relu.copy() #make a copy of
grad_h[grad_h < 0] = 0 #
grad_w1 = x.T.dot(grad_h) #50x100 and 50x40 should produce 40x100
w1 = w1 - learning_rate * grad_w1
w2 = w2 - learning_rate * grad_w2
Explanation: A simple numpy implementation of one hidden layer neural network.
In this implementation, for each update of $w_i$, both the forward and backward passes need to be computed.
End of explanation
import torch
N, D_in, D_hidden, D_out = 50, 40, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, D_hidden)
w2 = torch.randn(D_hidden, D_out)
learning_rate = 0.0001
for t in range(100):
h = x.mm(w1) #50x40 and 40x100 produce 50x100
#h = x.matmul(w1) #50x40 and 40x100 produce 50x100, matmul for checking
h_relu = h.clamp(min=0) #this has to be np.maximum as it takes two input arrays and do element-wise max, 50x100
y_pred = h_relu.mm(w2) #50x100 and 100x10 produce 50x10
#print y_pred.shape
loss = 0.5 * (y_pred - y).pow(2).sum()
grad_y_pred = y_pred - y #50x10
grad_w2 = h_relu.t().mm(grad_y_pred) #50x100 and 50x10 should produce 100x10, so transpose h_relu
grad_h_relu = grad_y_pred.mm(w2.t()) #50x10 and 100x10 should produce 50x100, so transpose w2
grad_h = grad_h_relu.clone() #make a copy
grad_h[grad_h < 0] = 0 #
grad_w1 = x.t().mm(grad_h) #50x100 and 50x40 should produce 40x100
w1 = w1 - learning_rate * grad_w1
w2 = w2 - learning_rate * grad_w2
Explanation: with very slight modifications, we could end up with the implementation of the same algorithm in PyTorch
End of explanation |
11,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Session 6
Step1: 2. Load the training data
Step2: 3. Explore the training data
Step3: 4. Convolution Neural Network
Step4: 5. Transfer Learning
Step5: 6. Transfer Learning Finetune | Python Code:
# Prepare the tools
import os
import random
import tensorflow as tf # TensorFlow
import tensorflow_hub as hub
import matplotlib.pyplot as plt # visualization tool
%matplotlib inline
import numpy as np
import PIL.Image as Image
print(f'Checking the TensorFlow version: {tf.__version__}')
Explanation: Session 6: Transfer-learning image classification with TensorFlow 2.x
A Taste of AI, week 6: 2020-08-11, 20:00 to 22:00 (120 minutes)
Import tools and check versions
Load the training data: from Google Drive
Check the pre-trained model
Explore the training data
Convolution Neural Network
Transfer Learning
Transfer Learning Finetune
References
Python 3 standard documentation
TensorFlow transfer learning
1. Import tools and check versions
End of explanation
try:
from google.colab import drive
drive.mount('/content/gdrive')
except:
print(f'Not running in a Google Colab environment.')
pass
!rm -r '/tmp/dataset'
!unzip -d '/tmp/' './dataset.zip' &> /dev/null
# !unzip -d '/tmp/' '/content/gdrive/My Drive/Colab Notebooks/dataset.zip' &> /dev/null
!ls '/tmp/dataset'
# prepare dataset
dataset_root = os.path.abspath(os.path.expanduser('/tmp/dataset'))
print(f'Dataset root: {dataset_root}')
IMAGE_SHAPE = (128, 128) # adjust this to match your own dataset!
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255,
validation_split=0.2)
train_data = image_generator.flow_from_directory(dataset_root, target_size=IMAGE_SHAPE,
subset='training')
validation_data = image_generator.flow_from_directory(dataset_root, target_size=IMAGE_SHAPE,
subset='validation')
for image_batch, label_batch in validation_data:
print(f'Image batch shape: {image_batch.shape}')
print(f'Label batch shape: {label_batch.shape}')
break
Explanation: 2. Load the training data: from Google Drive
End of explanation
classifier_url = 'https://tfhub.dev/google/imagenet/inception_v3/classification/4'
classifier = tf.keras.Sequential([
hub.KerasLayer(classifier_url, input_shape=IMAGE_SHAPE+(3,)) # Channel 3 RGB
])
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt',
'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
## using original ImangeNet classifier
result_batch = classifier.predict(image_batch)
print(f'Batch result shape: {result_batch.shape}')
predicted_class_names = imagenet_labels[np.argmax(result_batch, axis=-1)]
print(f'Batch predicted class names: {predicted_class_names}')
fig = plt.figure(figsize=(10, 10.5))
for n in range(30):
ax = fig.add_subplot(6, 5, n+1)
ax.imshow(image_batch[n])
ax.set_title(predicted_class_names[n])
ax.axis('off')
_ = fig.suptitle('ImageNet predictions')
Explanation: 3. Explore the training data
End of explanation
## Log class
### https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback
class CollectBatchStats(tf.keras.callbacks.Callback):
def __init__(self):
self.batch_losses = []
self.batch_val_losses = []
self.batch_acc = []
self.batch_val_acc = []
def on_epoch_end(self, epoch, logs=None):
self.batch_losses.append(logs['loss'])
self.batch_acc.append(logs['accuracy'])
self.batch_val_losses.append(logs['val_loss'])
self.batch_val_acc.append(logs['val_accuracy'])
self.model.reset_metrics()
cnn_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=image_batch.shape[1:]),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(train_data.num_classes)])
cnn_model.summary()
base_learning_rate = 0.001
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate),
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
steps_per_epoch = np.ceil(train_data.samples/train_data.batch_size) # train all dataset per epoch
epochs = 25*2
cnn_callback = CollectBatchStats()
cnn_history = cnn_model.fit(train_data,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
validation_data=validation_data,
callbacks=[cnn_callback])
# Draw learning curves chart
acc = cnn_callback.batch_acc
val_acc = cnn_callback.batch_val_acc
loss = cnn_callback.batch_losses
val_loss = cnn_callback.batch_val_losses
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(2, 1, 1)
ax.plot(acc, label='Training Accuracy')
ax.plot(val_acc, label='Validation Accuracy')
ax.legend(loc='lower right')
ax.set_ylabel('Accuracy')
ax.set_ylim([min(plt.ylim()),1])
ax.set_title('Training and Validation Accuracy')
ax = fig.add_subplot(2, 1, 2)
ax.plot(loss, label='Training Loss')
ax.plot(val_loss, label='Validation Loss')
ax.legend(loc='upper right')
ax.set_ylabel('Cross Entropy')
ax.set_ylim([0,1.0])
ax.set_title('Training and Validation Loss')
ax.set_xlabel('epoch')
_ = fig.suptitle('Our Convolution Neural Network')
# Plot results
class_names = sorted(validation_data.class_indices.items(), key=lambda pair:pair[1])
class_names = np.array([key.title() for key, value in class_names])
print(f'Classes: {class_names}')
## get result labels
predicted_batch = cnn_model.predict(image_batch)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
label_id = np.argmax(label_batch, axis=-1)
## plot
fig = plt.figure(figsize=(10, 10.5))
for n in range(30):
ax = fig.add_subplot(6, 5, n+1)
ax.imshow(image_batch[n])
color = 'green' if predicted_id[n] == label_id[n] else 'red'
ax.set_title(predicted_label_batch[n].title(), color=color)
ax.axis('off')
_ = fig.suptitle('Model predictions (green: correct, red: incorrect)')
Explanation: 4. Convolution Neural Network
End of explanation
# Prepare transfer learning
feature_extractor_url = 'https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4'
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
input_shape=IMAGE_SHAPE+(3, ))
feature_batch = feature_extractor_layer(image_batch)
print(f'Feature vector shape: {feature_batch.shape}')
## Frozen feature extraction layer
feature_extractor_layer.trainable = False
## Make a model for classification
model = tf.keras.Sequential([
feature_extractor_layer,
tf.keras.layers.Dense(train_data.num_classes, activation='softmax')
])
model.summary()
predictions= model(image_batch)
print(f'Prediction shape: {predictions.shape}')
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate),
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
steps_per_epoch = np.ceil(train_data.samples/train_data.batch_size) # train all dataset per epoch
initial_epoch = 25
batch_stats_callback = CollectBatchStats()
history = model.fit(train_data,
epochs=initial_epoch,
steps_per_epoch=steps_per_epoch,
validation_data=validation_data,
callbacks=[batch_stats_callback])
# Draw learning curves chart
acc = batch_stats_callback.batch_acc
val_acc = batch_stats_callback.batch_val_acc
loss = batch_stats_callback.batch_losses
val_loss = batch_stats_callback.batch_val_losses
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(2, 1, 1)
ax.plot(acc, label='Training Accuracy')
ax.plot(val_acc, label='Validation Accuracy')
ax.legend(loc='lower right')
ax.set_ylabel('Accuracy')
ax.set_ylim([min(plt.ylim()),1])
ax.set_title('Training and Validation Accuracy')
ax = fig.add_subplot(2, 1, 2)
ax.plot(loss, label='Training Loss')
ax.plot(val_loss, label='Validation Loss')
ax.legend(loc='upper right')
ax.set_ylabel('Cross Entropy')
ax.set_ylim([0,1.0])
ax.set_title('Training and Validation Loss')
ax.set_xlabel('epoch')
_ = fig.suptitle('Transfer Learning: Convolution Neural Network')
# Plot results
class_names = sorted(validation_data.class_indices.items(), key=lambda pair:pair[1])
class_names = np.array([key.title() for key, value in class_names])
print(f'Classes: {class_names}')
## get result labels
predicted_batch = model.predict(image_batch)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
label_id = np.argmax(label_batch, axis=-1)
## plot
fig = plt.figure(figsize=(10, 10.5))
for n in range(30):
ax = fig.add_subplot(6, 5, n+1)
ax.imshow(image_batch[n])
color = 'green' if predicted_id[n] == label_id[n] else 'red'
ax.set_title(predicted_label_batch[n].title(), color=color)
ax.axis('off')
_ = fig.suptitle('Model predictions (green: correct, red: incorrect)')
Explanation: 5. Transfer Learning
End of explanation
## Unfrozen feature extraction layer
feature_extractor_layer.trainable = True
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate/10),
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
model.summary()
finetune_epoch = 25
history_fine = model.fit(train_data,
epochs=initial_epoch+finetune_epoch,
initial_epoch=initial_epoch,
steps_per_epoch=steps_per_epoch,
validation_data=validation_data,
callbacks = [batch_stats_callback])
# Draw learning curves chart
fine_acc = batch_stats_callback.batch_acc
fine_val_acc = batch_stats_callback.batch_val_acc
fine_loss = batch_stats_callback.batch_losses
fine_val_loss = batch_stats_callback.batch_val_losses
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(2, 1, 1)
ax.plot(acc, label='Training Accuracy')
ax.plot(val_acc, label='Validation Accuracy')
ax.set_ylabel('Accuracy')
ax.set_ylim([min(plt.ylim()),1])
ax.plot([initial_epoch,initial_epoch],
ax.get_ylim(), label='Start Fine Tuning')
ax.legend(loc='lower right')
ax.set_title('Training and Validation Accuracy')
ax = fig.add_subplot(2, 1, 2)
ax.plot(loss, label='Training Loss')
ax.plot(val_loss, label='Validation Loss')
ax.set_ylabel('Cross Entropy')
ax.set_ylim([0,1.0])
ax.plot([initial_epoch,initial_epoch],
ax.get_ylim(), label='Start Fine Tuning')
ax.legend(loc='upper right')
ax.set_title('Training and Validation Loss')
ax.set_xlabel('epoch')
# Plot results
class_names = sorted(validation_data.class_indices.items(), key=lambda pair:pair[1])
class_names = np.array([key.title() for key, value in class_names])
print(f'Classes: {class_names}')
## get result labels
predicted_batch = model.predict(image_batch)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
label_id = np.argmax(label_batch, axis=-1)
## plot
fig = plt.figure(figsize=(10, 10.5))
for n in range(30):
ax = fig.add_subplot(6, 5, n+1)
ax.imshow(image_batch[n])
color = 'green' if predicted_id[n] == label_id[n] else 'red'
ax.set_title(predicted_label_batch[n].title(), color=color)
ax.axis('off')
_ = fig.suptitle('Model predictions (green: correct, red: incorrect)')
idx = 0
test_data = validation_data[idx//32][0][idx%32]
actual_label = class_names[np.argmax(validation_data[idx//32][1][idx%32])]
predicted_label = class_names[np.argmax(model.predict(tf.expand_dims(test_data, 0)))]
fig = plt.figure(figsize=(6, 6))
fig.set_facecolor('white')
ax = fig.add_subplot()
axm = ax.imshow(test_data)
fig.suptitle(f'Test Image [{idx}]', fontsize=14)
ax.set_title(f'Label: {predicted_label} (Actual: {actual_label})', fontsize=12)
ax.grid(False)
Explanation: 6. Transfer Learning Finetune
End of explanation |
11,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OKCupid Clean Data
OKCupid's website returns some partially hidden text when it is too long for their layout.
Lets skip these and just focus on the fully named places.
Step1: Feature
Step2: Geolocation APIs have hourly limits, so this was originally run using a cron job nightly to build up a large map of locations to (lat/lon)
Step3: User Table
For simplicity store lat/lon within user table. A location table and a user table would be better.
Step4: Feature
Step5: Feature
Step6: Write as json for archive tools | Python Code:
%matplotlib inline
import time
import pylab
import numpy as np
import pandas as pd
import pycupid.locations
people = pd.read_json('/Users/ajmendez/data/okcupid/random.json')
print('Scraping archive found {:,d} random people'.format(len(people)))
Explanation: OKCupid Clean Data
OKCupid's website returns some partially hidden text when it is too long for their layout.
Lets skip these and just focus on the fully named places.
End of explanation
locations = people['location'].astype(unicode)#.replace(r'\s+', np.nan, regex=True)
isgood = (locations.str.extract((u'(\u2026)')).isnull()) & (locations.str.len() > 0)
noriginal = len(locations.unique())
unique_locations = locations[isgood].unique()
nlocations = len(unique_locations)
print('There are a total of {:,d} unique locations and {:,d} good ones'.format(noriginal, nlocations))
print(' > missing locations: {:0.1f}%'.format((noriginal-nlocations)*100.0/noriginal))
print(' > missing people: {:0.1f}%'.format((len(locations)-len(np.where(isgood)[0]))*100.0/len(locations)))
Explanation: Feature: Lat/Lon
Locations are generally [city name], [state abbr].
However there are a number of locations that where too long for the search page and are abbreviated with a unicode u'\u2026'
Lets ignore these places on our first pass and then return to them later -- ~14% loss of locations
End of explanation
# does not seem to pickup the lat/lon notation from the old db
location_map = pd.read_json('/Users/ajmendez/data/okcupid/location_map.json', orient='index')
location_map.columns = ['lat', 'lon']
print('Location cache contains {:,d} locations'.format(len(location_map)))
# load v2:
location_map = pd.read_json('/Users/ajmendez/data/okcupid/locations_v2.json', orient='index')
geonames = pycupid.locations.getGN()
inew = 0
for i, location in enumerate(unique_locations):
if location in location_map.index:
continue
print u'Getting location: {}'.format(location)
try:
loc, (lat, lon) = geonames.geocode(location.encode('utf8'))
except Exception as e:
print u' > Failed: {}'.format(location)
# raise e
# too many loc* names!
location_map.loc[location] = [lat,lon]
inew += 1
# give the API a bit of a break
time.sleep(0.2)
if inew > 1000:
break
print len(location_map)
location_map.to_json('/Users/ajmendez/data/okcupid/locations_v2.json', orient='index')
Explanation: Geolocation APIs have hourly limits, so this was originally run using a cron job nightly to build up a large map of locations to (lat/lon)
End of explanation
finished = []
for i, location in enumerate(location_map.index):
if location in finished:
continue
tmp = location_map.loc[location]
isloc = (locations == location)
people.loc[isloc, 'lat'] = tmp['lat']
people.loc[isloc, 'lon'] = tmp['lon']
people.loc[isloc, 'nloc'] = isloc.sum()
finished.append(location)
if (i%1000 == 0):
print i,
# better plots later, this is just a test
people.plot('lon', 'lat', kind='scatter', s=2, lw=0, alpha=0.1)
people.to_csv('/Users/ajmendez/data/okcupid/random_v2.csv', encoding='utf-8')
Explanation: User Table
For simplicity store lat/lon within user table. A location table and a user table would be better.
End of explanation
people = pd.read_csv('/Users/ajmendez/data/okcupid/random_v2.csv')
tmp = people['username'].str.extract((u'(\d+)'))
people['username_number'] = tmp.apply(lambda x: int(x) if isinstance(x, (str, unicode)) else np.nan)
people['username_nlength'] = tmp.apply(lambda x: len(x) if isinstance(x, (str,unicode)) else 0)
people.to_csv('/Users/ajmendez/data/okcupid/random_v3.csv', encoding='utf-8')
Explanation: Feature: Numbers in Usernames
Extract the integers that are in each username
End of explanation
names = ['dinosaur', 'saur','saurus', 'dino','jurassic', 'rex', 'sarus',
'pterodactyl', 'archaeopter', 'pteranod', 'pterodact']
people['hasdino'] = people['username'].str.lower().str.extract((u'({})'.format('|'.join(names)))).notnull()
people.to_csv('/Users/ajmendez/data/okcupid/random_v4.csv', encoding='utf-8')
Explanation: Feature: Name Groups
End of explanation
people = pd.read_csv('/Users/ajmendez/data/okcupid/random_v2.csv')
people.to_json('/Users/ajmendez/data/okcupid/random_v2.json', orient='index')
Explanation: Write as json for archive tools
End of explanation |
11,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OP2 Demo
The iPython notebook for this demo can be found in
Step1: Sets default precision of real numbers for pandas output
Step2: As with the BDF, we can use the long form and the short form. However, the long form for the OP2 doesn't really add anything. So, let's just use the short form.
Besides massive speed improvements in the OP2 relative to v0.7, this version adds pandas dataframe support.
Step3: OP2 Introspection
The get_op2_stats() function lets you quickly understand what is in an OP2.
Step4: If that's too long...
Step5: Acccessing the Eigenvectors object
Eigenvectors are the simplest object. They use the same class as for displacements, velocity, acceleration, SPC Forces, MPC Forces, Applied Loads, etc. These are all node-based tables with TX, TY, TZ, RX, RY, RZ. Results are in the analysis coordinate frame (CD), which is defined by the GRID card.
Numpy-based Approach
We'll first show off the standard numpy based results on a transient case. Static results are the same, except that you'll always use the 0th index for the "time" index.
The tutorial is intetionally just accessing the objects in a very clear, though inefficient way. The OP2 objects can take full advantage of the numpy operations.
Step6: Pandas-based Approach
If you like pandas, you can access all the OP2 objects, which is very useful within the Jupyter Notebook. Different objects will look differently, but you can change the layout.
If you're trying to learn pandas, there are many tutorials online, such as
Step7: Accessing the plate stress/strain
Results are stored on a per element type basis.
The OP2 is the same as an F06, so CQUAD4 elements have centroidal-based results or centroidal-based as well as the results at the 4 corner nodes.
Be careful about what you're accessing.
Step8: Similar to the BDF, we can use object_attributes/methods
Step9: Number of Nodes on a CQUAD4
For linear CQUAD4s, there is 1 centroidal stress at two locations
For bilinear quads, there are 5 stresses at two locations (4 nodes + centroidal)
node_id=0 indicates a centroidal quantity
CTRIA3s are always centroidal
What sets this?
STRESS(real, sort1, BILIN) = ALL # bilinear cquad
STRESS(real, sort1, CENT) = ALL # linear quad
STRAIN(real, sort1, BILIN) = ALL # bilinear cquad
STRAIN(real, sort1, CENT) = ALL # linear quad
How do we know if we're bilinear?
print("is_bilinear = %s\n" % plate_stress.is_bilinear())
What locations are chosen?
That depends on fiber distance/fiber curvature...
- fiber_curvature - mean stress (oa) & slope (om)
$$ \sigma_{top} = \sigma_{alt} + \frac{t}{2} \sigma_{mean}$$
$$ \sigma_{btm} = \sigma_{alt} - \frac{t}{2} \sigma_{mean}$$
fiber_distance - upper and lower surface stress (o_top; o_btm)
If you have stress, fiber_distance is always returned regardless of your option.
What sets this?
STRAIN(real, sort1, FIBER) = ALL # fiber distance/default
STRAIN(real, sort1, STRCUR) = ALL # strain curvature
How do we know if we're using fiber_distance?
print("is_fiber_distance = %s" % plate_stress.is_fiber_distance())
Accessing results
Note that this is intentionally done inefficiently to access specific entries in order to explain the data structure.
Step10: Let's print out the actual mass properties from the OP2 and get the same result as the F06
We need PARAM,POSTEXT,YES in out BDF to get the Grid Point Weight Table
Step11: We can also write the full F06
Step12: The mass results are different as pyNastran's mass assumes point masses
$$m_{plates} = A * (rho * t + nsm)$$
$$m_{solid} = V * rho$$
$$m_{bars} = L * (rho * A + nsm)$$
$$I = m*r^2$$
The larger your model is and the further from the origin, the more accurate the result.
For some applications (e.g. a weight breakdown), this is probably fine.
Step13: It's not like Nastran is perfect either.
Limitations
You cannot do weight statements in Nastran by component/property/material.
Everything is always summmed up (e.g. you can have different geometry in Subcase 2 and MPCs connecting physical geomtry, with other parts flying off into space).
These are things that pyNastran can do.
Step14: Let's get the breakdown by property ID | Python Code:
import os
import copy
import numpy as np
import pyNastran
pkg_path = pyNastran.__path__[0]
from pyNastran.utils import print_bad_path
from pyNastran.op2.op2 import read_op2
from pyNastran.utils import object_methods, object_attributes
import pandas as pd
Explanation: OP2 Demo
The iPython notebook for this demo can be found in:
- docs\quick_start\demo\op2_demo.ipynb
- https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo.ipynb
Why use the OP2? Why not use the F06/PCH file?
Most people are comfortable with the F06. However, it's:
- Ironically, a lot harder to parse. The OP2 is very structured.
- Much, much, much slower. We can read entire blocks of arrays with a single call. The data is already typed.
- Much, much more memory inefficient because we aren't appending strings onto lists and turning that into a numpy array.
F06 parsers get ridiculously hard when you start do complicated results, like:
- single subcase buckling
- superelements
- SOL 200 optimization with sub-optimization
- SPOINTs
The pyNastran OP2 Reader is fast, highly validated, and it supports most result types. The data in the OP2 is also more accurate because there is no rounding.
Validating an OP2
The test_op2 script is created when you run python setup.py develop or python setup.py install on pyNastran. Assuming it's on your path (it'll be in Python27\Scripts or something similar), you can run:
```
test_op2 -f solid_bending.op2
```
The `-f` tells us to print out `solid_bending.test_op2.f06`, which can be compared to your F06 for a small file to build confidence in the reader. It's also useful when you want an F06 of your model without rerunning Nastran just to see what's in it.
If you have a large model, you can make test_op2 run much, much faster. The -c flag disables double-reading of the OP2. By default, test_op2 uses two different read methods (the old method and new method) to ensure that results are read in properly. When running the code, this is turned off, but is turned on for test_op2.
```
test_op2 -fc solid_bending.op2
```
Import the packages
End of explanation
pd.set_option('precision', 3)
np.set_printoptions(precision=3, threshold=20)
Explanation: Sets default precision of real numbers for pandas output
End of explanation
#op2_filename = r'D:\work\pynastran_0.8.0\models\iSat\ISat_Launch_Sm_Rgd.op2'
#op2_filename = r'D:\work\pynastran_0.8.0\models\iSat\ISat_Launch_Sm_4pt.op2'
op2_filename = os.path.abspath(os.path.join(pkg_path, '..', 'models', 'iSat', 'ISat_Launch_Sm_4pt.op2'))
assert os.path.exists(op2_filename), print_bad_path(op2_filename)
# define the input file with a file path
op2 = read_op2(op2_filename, build_dataframe=True, debug=False)
Explanation: As with the BDF, we can use the long form and the short form. However, the long form for the OP2 doesn't really add anything. So, let's just use the short form.
Besides massive speed improvements in the OP2 relative to v0.7, this version adds pandas dataframe support.
End of explanation
print(op2.get_op2_stats())
Explanation: OP2 Introspection
The get_op2_stats() function lets you quickly understand what is in an OP2.
End of explanation
print(op2.get_op2_stats(short=True))
Explanation: If that's too long...
End of explanation
# what modes did we analyze: 1 to 167
print("loadcases = %s" % op2.eigenvectors.keys())
# get subcase 1
eig1 = op2.eigenvectors[1]
modes = eig1.modes
times = eig1._times # the generic version of modes
print("modes = %s\n" % modes)
print("times = %s\n" % times)
imode2 = 1 # corresponds to mode 2
mode2 = eig1.data[imode2, :, :]
print('first 10 nodes and grid types\nNid Gridtype\n%s' % eig1.node_gridtype[:10, :])
node_ids = eig1.node_gridtype[:, 0]
index_node10 = np.where(node_ids == 10)[0] # we add the [0] because it's 1d
mode2_node10 = mode2[index_node10]
print("translation mode2_node10 = %s" % eig1.data[imode2, index_node10, :3].ravel())
print("rotations mode2_node10 = %s" % eig1.data[imode2, index_node10, 3:].ravel())
Explanation: Accessing the Eigenvectors object
Eigenvectors are the simplest object. They use the same class as for displacements, velocity, acceleration, SPC Forces, MPC Forces, Applied Loads, etc. These are all node-based tables with TX, TY, TZ, RX, RY, RZ. Results are in the analysis coordinate frame (CD), which is defined by the GRID card.
Numpy-based Approach
We'll first show off the standard numpy based results on a transient case. Static results are the same, except that you'll always use the 0th index for the "time" index.
The tutorial is intentionally just accessing the objects in a very clear, though inefficient, way. The OP2 objects can take full advantage of the numpy operations.
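For example, a single vectorized slice over the eig1.data array built in the cell above pulls one component for every node at once (shown only as a sketch of the array layout, not as part of the tutorial's step-by-step access):
```python
tz_mode2 = eig1.data[1, :, 2]  # TZ for every node in mode 2 (axes: mode, node, component)
```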
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('5JnMutdy6Fw')
#https://www.youtube.com/watch?v=5JnMutdy6Fw
# get subcase 1
eig1 = op2.eigenvectors[1]
eig1.data_frame
Explanation: Pandas-based Approach
If you like pandas, you can access all the OP2 objects, which is very useful within the Jupyter Notebook. Different objects will look differently, but you can change the layout.
If you're trying to learn pandas, there are many tutorials online, such as:
http://pandas.pydata.org/pandas-docs/stable/10min.html
or a very long, but good video:
End of explanation
# element forces/stresses/strains are by element type consistent with the F06, so...
plate_stress = op2.cquad4_stress[1]
print("plate_stress_obj = %s" % type(plate_stress))
# the set of variables in the RealPlateStressArray
print("plate_stress = %s\n" % plate_stress.__dict__.keys())
# list of parameters that define the object (e.g. what is the nonlinear variable name
print("data_code_keys = %s\n" % plate_stress.data_code.keys())
# nonlinear variable name
name = plate_stress.data_code['name']
print("name = %r" % plate_stress.data_code['name'])
print("list-type variables = %s" % plate_stress.data_code['data_names'])
# the special loop parameter
# for modal analysis, it's "modes"
# for transient, it's "times"
# or be lazy and use "_times"
print("modes = %s" % plate_stress.modes) # name + 's'
# extra list-type parameter for modal analysis; see data_names
#print("mode_cycles =", plate_stress.mode_cycles)
Explanation: Accessing the plate stress/strain
Results are stored on a per element type basis.
The OP2 is the same as an F06, so CQUAD4 elements have centroidal-based results or centroidal-based as well as the results at the 4 corner nodes.
Be careful about what you're accessing.
End of explanation
#print "attributes =", object_attributes(plate_stress)
print("methods = %s\n" % object_methods(plate_stress))
print('methods2= %s\n' % plate_stress.object_methods())
print("headers = %s\n" % plate_stress.get_headers())
Explanation: Similar to the BDF, we can use object_attributes/methods
End of explanation
# element forces/stresses/strains are by element type consistent
# with the F06, so...
def abs_max_min(vals):
absvals = list(abs(vals))
maxval = max(absvals)
i = absvals.index(maxval)
return vals[i]
#-----------------------------
# again, we have linear quads, so two locations per element
print("element_node[:10, :] =\n%s..." % plate_stress.element_node[:10, :])
# lets get the stress for the first 3 CQUAD4 elements
eids = plate_stress.element_node[:, 0]
ueids = np.unique(eids)
print('ueids = %s' % ueids[:3])
# get the first index of the first 5 elements
ieids = np.searchsorted(eids, ueids[:3])
print('ieids = %s' % ieids)
# the easy way to slice data for linear plates
ieids5 = np.vstack([ieids, ieids + 1]).ravel()
ieids5.sort()
print('verify5:\n%s' % ieids5)
#-----------------------------
itime = 0 # static analysis / mode 1
if plate_stress.is_von_mises(): # True
ovm = plate_stress.data[itime, :, 7]
print('we have von mises data; ovm=%s\n' % ovm)
else:
omax_shear = plate_stress.data[itime, :, 7]
print('we have max shear data; omax_shear=%s\n' % omax_shear)
print("[layer1, layer2, ...] = %s" % ovm[ieids5])
ieid1000 = np.where(eids == 1000)[0]
print('ieid1000 = %s' % ieid1000)
ovm_mode6_eid1000 = ovm[ieid1000]
print("ovm_mode6_eid1000 = %s -> %s" % (ovm_mode6_eid1000, abs_max_min(ovm_mode6_eid1000)))
# see the difference between "transient"/"modal"/"frequency"-style results
# and "nodal"/"elemental"-style results
# just change imode
imode = 5 # mode 6; could just as easily be dt
iele = 10 # element 10
ilayer = 1
ieid10 = np.where(eids == iele)[0][ilayer]
print('ieid10 = %s' % ieid10)
print(plate_stress.element_node[ieid10, :])
# headers = [u'fiber_distance', u'oxx', u'oyy', u'txy', u'angle', u'omax', u'omin', u'von_mises']
print("ps.modes = %s" % plate_stress.modes[imode])
print("ps.cycles = %s" % plate_stress.cycles[imode])
print("oxx = %s" % plate_stress.data[imode, ieid10, 1])
print("oyy = %s" % plate_stress.data[imode, ieid10, 2])
print("txy = %s" % plate_stress.data[imode, ieid10, 3])
print("omax = %s" % plate_stress.data[imode, ieid10, 5])
print("omin = %s" % plate_stress.data[imode, ieid10, 6])
print("ovm/max_shear = %s" % plate_stress.data[imode, ieid10, 7])
if plate_stress.is_fiber_distance():
print("fiber_distance = %s" % plate_stress.data[imode, ieid10, 0])
else:
print("curvature = %s" % plate_stress.data[imode, ieid10, 0])
from pyNastran.bdf.bdf import read_bdf
bdf_filename = os.path.abspath(os.path.join(pkg_path, '..', 'models', 'iSat', 'ISat_Launch_Sm_4pt.dat'))
model = read_bdf(bdf_filename, debug=False)
mass, cg, I = model.mass_properties()
Explanation: Number of Nodes on a CQUAD4
For linear CQUAD4s, there is 1 centroidal stress at two locations
For bilinear quads, there are 5 stresses at two locations (4 nodes + centroidal)
node_id=0 indicates a centroidal quantity
CTRIA3s are always centroidal
What sets this?
STRESS(real, sort1, BILIN) = ALL # bilinear cquad
STRESS(real, sort1, CENT) = ALL # linear quad
STRAIN(real, sort1, BILIN) = ALL # bilinear cquad
STRAIN(real, sort1, CENT) = ALL # linear quad
How do we know if we're bilinear?
print("is_bilinear = %s\n" % plate_stress.is_bilinear())
What locations are chosen?
That depends on fiber distance/fiber curvature...
- fiber_curvature - mean stress (oa) & slope (om)
$$ \sigma_{top} = \sigma_{alt} + \frac{t}{2} \sigma_{mean}$$
$$ \sigma_{btm} = \sigma_{alt} - \frac{t}{2} \sigma_{mean}$$
fiber_distance - upper and lower surface stress (o_top; o_btm)
If you have stress, fiber_distance is always returned regardless of your option.
What sets this?
STRAIN(real, sort1, FIBER) = ALL # fiber distance/default
STRAIN(real, sort1, STRCUR) = ALL # strain curvature
How do we know if we're using fiber_distance?
print("is_fiber_distance = %s" % plate_stress.is_fiber_distance())
Accessing results
Note that this is intentionally done inefficiently to access specific entries in order to explain the data structure.
End of explanation
gpw = op2.grid_point_weight
#print(gpw.object_attributes())
print(gpw)
Explanation: Let's print out the actual mass properties from the OP2 and get the same result as the F06
We need PARAM,POSTEXT,YES in our BDF to get the Grid Point Weight Table
End of explanation
import getpass
name = getpass.getuser()
os.chdir(os.path.join(r'C:\Users', name, 'Desktop'))
# write the F06 with Real/Imaginary or Magnitude/Phase
# only matters for complex results
op2.write_f06('isat.f06', is_mag_phase=False)
!head -n 40 isat.f06
#from IPython.display import display, Math, Latex
Explanation: We can also write the full F06
End of explanation
print('cg =\n%s' % gpw.cg)
print('cg = %s' % cg)
Explanation: The mass results are different as pyNastran's mass assumes point masses
$$m_{plates} = A * (rho * t + nsm)$$
$$m_{solid} = V * rho$$
$$m_{bars} = L * (rho * A + nsm)$$
$$I = m*r^2$$
The larger your model is and the further from the origin, the more accurate the result.
For some applications (e.g. a weight breakdown), this is probably fine.
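As a quick worked example with made-up numbers: a single plate with $A = 1$ m$^2$, $rho * t = 2$ kg/m$^2$ and $nsm = 0.5$ kg/m$^2$ contributes $m = 1 * (2 + 0.5) = 2.5$ kg, and if its centroid sits $r = 2$ m from the reference point it adds $I = 2.5 * 2^2 = 10$ kg-m$^2$ to the inertia about that point.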
End of explanation
from pyNastran.bdf.bdf import read_bdf
bdf_filename = os.path.abspath(os.path.join(pkg_path, '..', 'models', 'iSat', 'ISat_Launch_Sm_4pt.dat'))
model = read_bdf(bdf_filename, debug=False)
Explanation: It's not like Nastran is perfect either.
Limitations
You cannot do weight statements in Nastran by component/property/material.
Everything is always summed up (e.g. you can have different geometry in Subcase 2 and MPCs connecting physical geometry, with other parts flying off into space).
These are things that pyNastran can do.
End of explanation
from six import iteritems
#help(model.mass_properties)
pid_to_eids_map = model.get_element_ids_dict_with_pids()
#print(pid_to_eids_map.keys())
print('pid, mass, cg, [ixx, iyy, izz, ixy, ixz]')
for pid, eids in sorted(iteritems(pid_to_eids_map)):
mass, cg, inertia = model.mass_properties(element_ids=eids, reference_point=[0., 0., 0.])
print('%-3s %-.6f %-38s %s' % (pid, mass, cg, inertia))
Explanation: Let's get the breakdown by property ID
End of explanation |
11,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch 11
Step1: First, define the constants.
Let's say we're dealing with 1-dimensional vectors, and a maximum sequence size of 3.
Step2: Next up, define the placeholder(s).
We only need one for this simple example
Step3: Now let's make a helper function to create LSTM cells
Step4: Call the function and extract the cell outputs.
Step5: You know what? We can just keep stacking cells on top of each other. In a new variable scope, you can pipe the output of the previous cell to the input of the new cell. Check it out
Step6: What if we wanted 5 layers of RNNs?
There's a useful shortcut that the TensorFlow library supplies, called MultiRNNCell. Here's a helper function to use it
Step7: Here's the helper function in action
Step8: Before starting a session, let's prepare some simple input to the network.
Step9: Start the session, and initialize variables.
Step10: We can run the outputs to verify that the code is sound. | Python Code:
import tensorflow as tf
Explanation: Ch 11: Concept 01
Multi RNN
All we need is TensorFlow:
End of explanation
input_dim = 1
seq_size = 3
Explanation: First, define the constants.
Let's say we're dealing with 1-dimensional vectors, and a maximum sequence size of 3.
End of explanation
input_placeholder = tf.placeholder(dtype=tf.float32, shape=[None, seq_size, input_dim])
Explanation: Next up, define the placeholder(s).
We only need one for this simple example: the input placeholder.
End of explanation
def make_cell(state_dim):
return tf.contrib.rnn.LSTMCell(state_dim)
Explanation: Now let's make a helper function to create LSTM cells
End of explanation
with tf.variable_scope("first_cell") as scope:
cell = make_cell(state_dim=10)
outputs, states = tf.nn.dynamic_rnn(cell, input_placeholder, dtype=tf.float32)
Explanation: Call the function and extract the cell outputs.
End of explanation
with tf.variable_scope("second_cell") as scope:
cell2 = make_cell(state_dim=10)
outputs2, states2 = tf.nn.dynamic_rnn(cell2, outputs, dtype=tf.float32)
Explanation: You know what? We can just keep stacking cells on top of each other. In a new variable scope, you can pipe the output of the previous cell to the input of the new cell. Check it out:
End of explanation
def make_multi_cell(state_dim, num_layers):
cells = [make_cell(state_dim) for _ in range(num_layers)]
return tf.contrib.rnn.MultiRNNCell(cells)
Explanation: What if we wanted 5 layers of RNNs?
There's a useful shortcut that the TensorFlow library supplies, called MultiRNNCell. Here's a helper function to use it:
End of explanation
multi_cell = make_multi_cell(state_dim=10, num_layers=5)
outputs5, states5 = tf.nn.dynamic_rnn(multi_cell, input_placeholder, dtype=tf.float32)
Explanation: Here's the helper function in action:
End of explanation
input_seq = [[1], [2], [3]]
Explanation: Before starting a session, let's prepare some simple input to the network.
End of explanation
init_op = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init_op)
Explanation: Start the session, and initialize variables.
End of explanation
outputs_val, outputs2_val, outputs5_val = sess.run([outputs, outputs2, outputs5],
feed_dict={input_placeholder: [input_seq]})
print(outputs_val)
print(outputs2_val)
print(outputs5_val)
Explanation: We can run the outputs to verify that the code is sound.
End of explanation |
11,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Step1: What is Data?
A dataset consists of multiple data rows.
Each row describes an item with its features.
Think of features as properties of a sample. E.g. an apple has color, size, kind, ...
Step2: We separate dataset into features and labels matrices.
Features could be vectors (data rows has only 1 feature), Matrices (having multiple features) or Tensors (each feature is itself another matrix or tensor)
Labels could be vectors, matrices or tensors too.
Usually features are matrices and we declare it by X and lables are vectors declared by y.
We declare the dataset size as the number of data items in it.
Step3: Supervised Learning
TODO
Regression
TODO
Prepare the data
First we prepare the data and preprocess it.
Standardization, Normalization and Imputation are the common preprocesses.
Step4: Train/Test Split
In order to reguralize the model, we split the dataset into training and test sets.
Training and test sets are similar in shape except in size.
Usually training set is the bigger part of the dataset (e.g. 80%) and the rest is test set (e.g. 20%).
Step5: Train the model on the training set
Step6: Evaluate on the test set
Step7: The model is ready and can be used to predict on real data
Classification
Prepare the data
Just like regression.
Step8: Train/Test Split
Step9: Train the model on the training set
Step10: Evaluate on the test set
Step11: Truely Classified Digits
Step12: Misclassified Digits
Step13: Unsupervised Learning
Clustering
Prepare the data
Like supervised learning, we first gather and preprocess the data.
The difference is that there is no lables vector in clusterin. Only features (X).
Also train/test split doesn't make sense since there is no true lable to compute the score.
There are (fortunately!) some evaluation metrics but they work on the whole dataset.
Some other evaluation metrics work when the true labels are provided.
Step14: Train the model on whole dataset
Step15: PCA
TODO | Python Code:
# importing numpy, pandas & matplotlib
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import random
%matplotlib inline
Explanation: Introduction
End of explanation
# Load Iris dataset
from sklearn.datasets import load_iris
iris = load_iris()
iris.feature_names
print(iris.DESCR)
Explanation: What is Data?
A dataset consists of multiple data rows.
Each row describes an item with its features.
Think of features as properties of a sample. E.g. an apple has color, size, kind, ...
End of explanation
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = pd.Series(iris.target)
print('X.shape =', X.shape)
print('y.shape =', y.shape)
X.head()
y.head()
plt.figure(figsize=(6, 6));
plt.scatter(X.values[:, 0], X.values[:, 1], c=y, cmap=plt.cm.rainbow);
# Correlation plots are a good way to understand the dataset
pd.plotting.scatter_matrix(X, figsize=(10, 10));
Explanation: We separate dataset into features and labels matrices.
Features could be vectors (data rows has only 1 feature), Matrices (having multiple features) or Tensors (each feature is itself another matrix or tensor)
Labels could be vectors, matrices or tensors too.
Usually the features form a matrix, which we denote by X, and the labels form a vector, denoted by y.
We declare the dataset size as the number of data items in it.
End of explanation
from sklearn.datasets import load_boston
boston = load_boston()
boston.feature_names
print(boston.DESCR)
X = pd.DataFrame(boston.data, columns=boston.feature_names)
y = pd.Series(boston.target)
print('X.shape =', X.shape)
print('y.shape =', y.shape)
X.head()
y.head()
plt.scatter(X.RM, y, marker='.');
plt.xlabel('# Rooms');
plt.ylabel('House Price');
plt.scatter(X.LSTAT, y, marker='.');
plt.xlabel('LSTAT');
plt.ylabel('House Price');
X = X.RM
Explanation: Supervised Learning
TODO
Regression
TODO
Prepare the data
First we prepare the data and preprocess it.
Standardization, Normalization and Imputation are the common preprocesses.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X.values.reshape(X.shape[0], 1),
y.values.reshape(y.shape[0], 1),
test_size=.3)
print('X_train.shape =', X_train.shape)
print('X_test.shape =', X_test.shape)
print('y_train.shape =', y_train.shape)
print('y_test.shape =', y_test.shape)
Explanation: Train/Test Split
In order to reguralize the model, we split the dataset into training and test sets.
Training and test sets are similar in shape except in size.
Usually training set is the bigger part of the dataset (e.g. 80%) and the rest is test set (e.g. 20%).
End of explanation
from sklearn.linear_model import LinearRegression
model = LinearRegression().fit(X_train, y_train)
Explanation: Train the model on the training set
End of explanation
y_pred = model.predict(X_test)
from sklearn.metrics import mean_squared_error
print('Mean squared error =', mean_squared_error(y_test, y_pred))
print('Linear Regression score = %.2f%%' % (model.score(X_test, y_test) * 100))
plt.scatter(X_test, y_test, marker='.');
plt.plot(X_test, y_pred, color='red');
Explanation: Evaluate on the test set
End of explanation
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
X = pd.DataFrame(mnist.data)
y = pd.Series(mnist.target)
f, axes = plt.subplots(3, 4, figsize=(8, 8));
for i in range(3):
for j in range(4):
axes[i, j].axis('off')
if i == 2 and j >= 2:
continue
num = i * 4 + j
axes[i, j].set_title(num)
axes[i, j].matshow(X.values[y == num][0].reshape(28, 28))
Explanation: The model is ready and can be used to predict on real data
Classification
Prepare the data
Just like regression.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=10000, shuffle=False)
print('X_train.shape =', X_train.shape)
print('y_train.shape =', y_train.shape)
print('X_test.shape =', X_test.shape)
print('y_test.shape =', y_test.shape)
Explanation: Train/Test Split
End of explanation
from sklearn.linear_model import SGDClassifier
model = SGDClassifier().fit(X_train, y_train)
Explanation: Train the model on the training set
End of explanation
print('SGD Classifier score = %.2f%%' % (model.score(X_test, y_test) * 100))
f, axes = plt.subplots(3, 4, figsize=(8, 8));
for i in range(3):
for j in range(4):
axes[i, j].axis('off')
if i == 2 and j >= 2:
continue
axes[i, j].set_title(i * 4 + j)
axes[i, j].matshow(model.coef_[i * 4 + j].reshape(28, 28))
Explanation: Evaluate on the test set
End of explanation
y_pred = model.predict(X_test)
f, axes = plt.subplots(1, 5, figsize=(12, 4))
samples = random.sample(list(y_test[y_test == y_pred].iteritems()), 5)
for i_zero, (i, p) in enumerate(samples):
axes[i_zero].axis('off')
axes[i_zero].matshow(X_test.loc[i].values.reshape(28, 28))
axes[i_zero].set_title('pred = %d' % y_pred[i - 60000], color='green')
Explanation: Correctly Classified Digits
End of explanation
y_pred = model.predict(X_test)
f, axes = plt.subplots(1, 5, figsize=(12, 4))
samples = random.sample(list(y_test[y_test != y_pred].iteritems()), 5)
for i_zero, (i, p) in enumerate(samples):
axes[i_zero].axis('off')
axes[i_zero].matshow(X_test.loc[i].values.reshape(28, 28))
axes[i_zero].set_title('pred = %d' % y_pred[i - 60000], color='red')
Explanation: Misclassified Digits
End of explanation
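To see which digits the classifier tends to confuse with which, a confusion matrix is a natural follow-up. A minimal sketch, reusing the fitted model and test split from above:
from sklearn.metrics import confusion_matrix
# Rows are true digits, columns are predictions; large off-diagonal entries
# show which pairs the linear classifier mixes up.
cm = confusion_matrix(y_test, y_pred)
plt.matshow(cm, cmap=plt.cm.Blues)
plt.xlabel('predicted')
plt.ylabel('true')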
from sklearn.datasets import make_moons
X, y = make_moons(1000, noise=.05)
plt.scatter(X[:, 0], X[:, 1], s=10);
Explanation: Unsupervised Learning
Clustering
Prepare the data
Like supervised learning, we first gather and preprocess the data.
The difference is that there is no labels vector in clustering, only features (X).
Train/test splitting also doesn't make sense, since there are no true labels to compute a score against.
There are (fortunately!) some evaluation metrics but they work on the whole dataset.
Some other evaluation metrics work when the true labels are provided.
End of explanation
from sklearn.cluster import DBSCAN
model = DBSCAN(eps=.2).fit(X)
plt.scatter(X[:, 0], X[:, 1], s=10, c=model.labels_, cmap=plt.cm.rainbow);
from sklearn.metrics import adjusted_rand_score
print('Adjusted rand index =', adjusted_rand_score(y, model.labels_))
Explanation: Train the model on whole dataset
End of explanation
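The explanation above mentions metrics that need no true labels; the silhouette score is one such internal measure. A sketch computed on the DBSCAN labels found above (note that noise points, labelled -1, are treated as their own group here):
from sklearn.metrics import silhouette_score
# Silhouette uses only the features and the assigned cluster labels,
# so it works even when no ground-truth labels exist.
print('Silhouette score =', silhouette_score(X, model.labels_))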
from sklearn.datasets import fetch_olivetti_faces
faces = fetch_olivetti_faces()
plt.subplots(2, 3)
for i in range(6):
plt.subplot(2, 3, i + 1)
plt.axis('off')
plt.imshow(faces.images[10 * i].reshape(64, 64), cmap=plt.cm.gray)
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
faces.data = StandardScaler().fit_transform(faces.data)
pca = PCA().fit(faces.data)
plt.subplots(2, 3)
for i in range(6):
plt.subplot(2, 3, i + 1)
plt.axis('off')
plt.imshow(pca.components_[i].reshape(64, 64), cmap=plt.cm.gray_r)
Explanation: PCA
TODO
End of explanation |
11,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural net painter
This notebook demonstrates a fun experiment in training a neural network to do regression from the color (r,g,b) of a pixel in an image, given its (x,y) position in the image. It's mostly useless, but gives a nice visual intuition for regression. This notebook is inspired by the same example in convnet.js and the first part of this notebook is mostly reimplementing it in Keras instead. Later, we'll have some fun interpolating different image models.
First make sure the following import statements work.
Step1: First we'll open an image, and create a helper function that converts that image into a training set of (x,y) positions (the data) and their corresponding (r,g,b) colors (the labels). We'll then load a picture with it.
Step2: We've postfixed all the variable names with a 1 because later we'll open a second image.
We're now going to define a neural network which takes a 2-neuron input (the normalized x, y position) and outputs a 3-neuron output corresponding to color. We'll use Keras's Sequential class to create a deep neural network with a bunch of 20-neuron fully-connected layers with ReLU activations. Our loss function will be a mean_squared_error between the predicted colors and the actual ones from the image.
Once we've defined that model, we'll create a neural network m1 with that architecture.
Step3: Let's now go ahead and train our neural network. In this case, we are going to use the training set as the validation set as well. Normally, you'd never do this because it would cause your neural network to overfit. But in this experiment, we're not worried about overfitting... in fact, overfitting is the whole point!
We train for 25 epochs and have a batch size of 5.
Step4: Now that the neural net is finished training, let's take the training data, our pixel positions, and simply send them back straight through the network, and plot the predicted colors on a new image. We'll make a new function for this called generate_image.
Step5: Sort of looks like the original image a bit! Of course the network can't learn the mapping perfectly without pretty much memorizing the data, but this way gives us a pretty good impression and doubles as an extremely inefficient form of compression!
Let's load another image. We'll load the second image and also resize it so that it's the same size as the first image.
Step6: Now we'll repeat the experiment from before. We'll make a new neural network m2 which will learn to map im2's (x,y) positions to its (r,g,b) colors.
Step7: Let's generate a new image from m2 and see how it looks.
Step8: Not too bad!
Now let's do something funky. We're going to make a new neural network, m3, with the same architecture as m1 and m2 but instead of training it, we'll just set its weights to be interpolations between the weights of m1 and m2 and at each step, we'll generate a new image. In other words, we'll gradually change the model learned from the first image into the model learned from the second image, and see what kind of an image it outputs at each step.
To help us do this, we'll create a function get_interpolated_weights and we'll make one change to our image generation function
Step9: Neat... Let's do one last thing, and make an animation with more frames. We'll generate 120 frames inside the assets folder, then use ffmpeg to stitch them into an mp4 file. If you don't have ffmpeg, you can install it from here.
Step10: You can find the video now in the assets directory. Looks neat! We can also display it in this notebook. From here, there's a lot of fun things we can do... Triangulating between multiple images, or streaming together several interpolations, or predicting color from not just position, but time in a movie. Lots of possibilities. | Python Code:
%matplotlib inline
import time
from PIL import Image
import numpy as np
import keras
from matplotlib.pyplot import imshow, figure
from keras.models import Sequential
from keras.layers import Dense
Explanation: Neural net painter
This notebook demonstrates a fun experiment in training a neural network to do regression from the color (r,g,b) of a pixel in an image, given its (x,y) position in the image. It's mostly useless, but gives a nice visual intuition for regression. This notebook is inspired by the same example in convnet.js and the first part of this notebook is mostly reimplementing it in Keras instead. Later, we'll have some fun interpolating different image models.
First make sure the following import statements work.
End of explanation
def get_data(img):
width, height = img.size
pixels = img.getdata()
x_data, y_data = [],[]
for y in range(height):
for x in range(width):
idx = x + y * width
r, g, b = pixels[idx]
x_data.append([x / float(width), y / float(height)])
y_data.append([r, g, b])
x_data = np.array(x_data)
y_data = np.array(y_data)
return x_data, y_data
im1 = Image.open("../assets/dog.jpg")
x1, y1 = get_data(im1)
print("data", x1)
print("labels", y1)
imshow(im1)
Explanation: First we'll open an image, and create a helper function that converts that image into a training set of (x,y) positions (the data) and their corresponding (r,g,b) colors (the labels). We'll then load a picture with it.
End of explanation
def make_model():
model = Sequential()
model.add(Dense(2, activation='relu', input_shape=(2,)))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(3))
model.compile(loss='mean_squared_error', optimizer='adam')
return model
m1 = make_model()
Explanation: We've postfixed all the variable names with a 1 because later we'll open a second image.
We're now going to define a neural network which takes a 2-neuron input (the normalized x, y position) and outputs a 3-neuron output corresponding to color. We'll use Keras's Sequential class to create a deep neural network with a bunch of 20-neuron fully-connected layers with ReLU activations. Our loss function will be a mean_squared_error between the predicted colors and the actual ones from the image.
Once we've defined that model, we'll create a neural network m1 with that architecture.
End of explanation
m1.fit(x1, y1, batch_size=5, epochs=25, verbose=1, validation_data=(x1, y1))
Explanation: Let's now go ahead and train our neural network. In this case, we are going to use the training set as the validation set as well. Normally, you'd never do this because it would cause your neural network to overfit. But in this experiment, we're not worried about overfitting... in fact, overfitting is the whole point!
We train for 25 epochs and have a batch size of 5.
End of explanation
def generate_image(model, x, width, height):
img = Image.new("RGB", [width, height])
pixels = img.load()
y_pred = model.predict(x)
for y in range(height):
for x in range(width):
idx = x + y * width
r, g, b = y_pred[idx]
pixels[x, y] = (int(r), int(g), int(b))
return img
img = generate_image(m1, x1, im1.width, im1.height)
imshow(img)
Explanation: Now that the neural net is finished training, let's take the training data, our pixel positions, and simply send them back straight through the network, and plot the predicted colors on a new image. We'll make a new function for this called generate_image.
End of explanation
im2 = Image.open("../assets/kitty.jpg")
im2 = im2.resize(im1.size)
x2, y2 = get_data(im2)
print("data", x2)
print("labels", y2)
imshow(im2)
Explanation: Sort of looks like the original image a bit! Of course the network can't learn the mapping perfectly without pretty much memorizing the data, but this way gives us a pretty good impression and doubles as an extremely inefficient form of compression!
Let's load another image. We'll load the second image and also resize it so that it's the same size as the first image.
End of explanation
m2 = make_model() # make a new model, keep m1 separate
m2.fit(x2, y2, batch_size=5, epochs=25, verbose=1, validation_data=(x2, y2))
Explanation: Now we'll repeat the experiment from before. We'll make a new neural network m2 which will learn to map im2's (x,y) positions to its (r,g,b) colors.
End of explanation
img = generate_image(m2, x2, im2.width, im2.height)
imshow(img)
Explanation: Let's generate a new image from m2 and see how it looks.
End of explanation
def get_interpolated_weights(model1, model2, amt):
w1 = np.array(model1.get_weights())
w2 = np.array(model2.get_weights())
w3 = np.add((1.0 - amt) * w1, amt * w2)
return w3
def generate_image_rescaled(model, x, width, height):
img = Image.new("RGB", [width, height])
pixels = img.load()
y_pred = model.predict(x)
y_pred = 255.0 * (y_pred - np.min(y_pred)) / (np.max(y_pred) - np.min(y_pred)) # rescale y_pred
for y in range(height):
for x in range(width):
idx = x + y * width
r, g, b = y_pred[idx]
pixels[x, y] = (int(r), int(g), int(b))
return img
# make new model to hold interpolated weights
m3 = make_model()
# we'll do 8 frames and stitch the images together at the end
n = 8
interpolated_images = []
for i in range(n):
amt = float(i)/(n-1.0)
w3 = get_interpolated_weights(m1, m2, amt)
m3.set_weights(w3)
img = generate_image_rescaled(m3, x1, im1.width, im1.height)
interpolated_images.append(img)
full_image = np.concatenate(interpolated_images, axis=1)
figure(figsize=(16,4))
imshow(full_image)
Explanation: Not too bad!
Now let's do something funky. We're going to make a new neural network, m3, with the same architecture as m1 and m2 but instead of training it, we'll just set its weights to be interpolations between the weights of m1 and m2 and at each step, we'll generate a new image. In other words, we'll gradually change the model learned from the first image into the model learned from the second image, and see what kind of an image it outputs at each step.
To help us do this, we'll create a function get_interpolated_weights and we'll make one change to our image generation function: instead of just coloring the pixels to be the exact outputs, we'll auto-normalize every frame by rescaling the minimum and maximum output color to 0 to 255. This is because sometimes the intermediate models output in different ranges than what m1 and m2 were trained to. Yeah, this is a bit of a hack, but it works!
End of explanation
n = 120
frames_dir = '../assets/neural-painter-frames'
video_path = '../assets/neural-painter-interpolation.mp4'
import os
if not os.path.isdir(frames_dir):
os.makedirs(frames_dir)
for i in range(n):
amt = float(i)/(n-1.0)
w3 = get_interpolated_weights(m1, m2, amt)
m3.set_weights(w3)
img = generate_image_rescaled(m3, x1, im1.width, im1.height)
img.save('../assets/neural-painter-frames/frame%04d.png'%i)
cmd = 'ffmpeg -i %s/frame%%04d.png -c:v libx264 -pix_fmt yuv420p %s' % (frames_dir, video_path)
os.system(cmd)
Explanation: Neat... Let's do one last thing, and make an animation with more frames. We'll generate 120 frames inside the assets folder, then use ffmpeg to stitch them into an mp4 file. If you don't have ffmpeg, you can install it from here.
End of explanation
from IPython.display import HTML
import io
import base64
video = io.open(video_path, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
Explanation: You can find the video now in the assets directory. Looks neat! We can also display it in this notebook. From here, there's a lot of fun things we can do... Triangulating between multiple images, or streaming together several interpolations, or predicting color from not just position, but time in a movie. Lots of possibilities.
End of explanation |
11,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Author
Step1: Preprocess Data
For protyping I randomly sampled 10% the original dataset w/ this command
Step2: Data preprocessing
Step3: Categorical Features
Since all the features are categorical, we will one-hot encode them as features for the training process
Step4: Training
Now, in preparation for training and testing, we can now split our dataset into a training set and a test set. We can either do this randomly or by an arbitrary date.
Step5: Models
Ridge Regression
Now we are going to use regularized linear regression models from the scikit learn module. I'm going to try both l_1(Lasso) and l_2(Ridge) regularization. I'll also define a function that returns the cross-validation rmse error so we can evaluate our models and pick the best tuning params
Step6: The main tuning parameter for the Ridge model is alpha - a regularization parameter that measures how flexible our model is. The higher the regularization the less prone our model will be to overfit. However it will also lose flexibility and might not capture all of the signal in the data.
Step7: Note the U-ish shaped curve above. When alpha is too small the regularization is too strong and the model cannot capture all the complexities in the data. If however we let the model be too flexible (alpha large) the model begins to overfit. A value of alpha = 30 is about right based on the plot above.
Step8: So for the Ridge regression we get a rmsle of about 0.68
Lasso Regression
Let's try out the Lasso model. We will do a slightly different approach here and use the built in Lasso CV to figure out the best alpha for us. For some reason the alphas in Lasso CV are really the inverse or the alphas in Ridge.
Step9: Lasso seems to preform the same, so we'll use it on the dataset. Another neat thing about the Lasso is that it does feature selection for you - setting coefficients of features it deems unimportant to zero. Let's take a look at the coefficients
Step10: Good job Lasso. One thing to note here however is that the features selected are not necessarily the "correct" ones - especially since there are a lot of collinear features in this dataset. One idea to try here is run Lasso a few times on boostrapped samples and see how stable the feature selection is.
We can also take a look directly at what the most important coefficients are
Step11: The most important positive feature is the London boolean-- whether or not this property is in London. This definitely makes sense. Then a few other location and quality features contributed positively. Some of the negative features make less sense and would be worth looking into more - it seems like they might come from unbalanced categorical variables.
Also note that unlike the feature importance you'd get from a random forest these are actual coefficients in your model - so you can say precisely why the predicted price is what it is. The only issue here is that we log_transformed both the target and the numeric features so the actual magnitudes are a bit hard to interpret.
Gradient Boosted Regressor
Let's add an xgboost model to our linear model to see if we can improve our score
Step12: In the figure above the flat lines (after 100 iterations) show that the model is not overfitting. Thustly we can choose 100 for our # of iterators on our model.
Step13: Blended Regressor
The weights in the average (0.7, 0.3) are hyper-parameters - I used validation set to see what the best cutoff is. Basically this means I am weighting the preds from the lasso somewhat more heavily than the xgboost preds | Python Code:
import pandas as pd
import numpy as np
%pylab inline
import matplotlib.pyplot as plt
Explanation: Author: Alex Egg
This is my submission for the Machine Learning Scientist role at Amazon Development Centre (Scotland). I spent about 2 hours on it.
Intro
My old professor at UCSD, Yoav Freund, is one of the original innovators behind Boosting methods. Accordingly, I thought I'd give it a shot for this Amazon DC Scotland homework.
First, I'll start simple w/ a regularized linear regression model, then I'll try a Gradient Boosted Regression.
Tools
We will make extensive use of pandas and numpy for the initial data cleaning and processing. Then I will use the convenient scikit modules for training and evaluation metrics:
End of explanation
!cat data/pp-complete.csv | awk 'BEGIN {srand()} !/^$/ { if (rand() <= .10) print $0}' > data/sample_10.csv
!wc -l data/sample_10.csv
headers=["ID", "Price Sale", "Date of Transfer", "Postcode", "Property Type", "Old/New", "Duration", "PAON", "SAON",
"Street", "Locality", "Town/City", "District", "County", "PPD Category Type", "Record Status"]
raw = pd.read_csv("data/sample_10.csv", names=headers, parse_dates=["Date of Transfer"], index_col=0)
print raw.shape
raw.head()
Explanation: Preprocess Data
For protyping I randomly sampled 10% the original dataset w/ this command:
End of explanation
data = raw.drop(raw[raw["Price Sale"]>2000000].index)
data = data.drop(raw[raw["Price Sale"]<15000].index)
plt.rcParams['figure.figsize'] = (12.0, 6.0)
prices = pd.DataFrame({"A. price": data["Price Sale"], "B. log(price + 1)": np.log1p(data["Price Sale"])})
prices.hist()
#per the instructions we are using these three features for training
feature_cols = ["Date of Transfer", "Property Type", "Duration", "Town/City"]
reduced = data[["Price Sale"] + feature_cols]
#log transform the target:
reduced["SalePrice"] = np.log1p(reduced["Price Sale"])
reduced["is_london"] = reduced["Town/City"].apply(lambda x: 1 if (x=="LONDON") else 0)
reduced = reduced[["SalePrice", "Date of Transfer", "Property Type", "Duration", "is_london"]]
reduced.head()
Explanation: Data preprocessing:
We're not going to do anything fancy here:
Remove outliers
Transform the skewed numeric features by taking log(feature + 1) - this will make the features more normal
Create Dummy variables for the categorical features
End of explanation
reduced = pd.get_dummies(reduced)
reduced.head(1)
Explanation: Categorical Features
Since all the features are categorical, we will one-hot encode them as features for the training process:
End of explanation
def test_train_split_on_date(data, d):
feature_cols=['is_london', 'Property Type_D', 'Property Type_F',
'Property Type_O', 'Property Type_S', 'Property Type_T',
'Duration_F', 'Duration_L', 'Duration_U']
pre = data[data["Date of Transfer"]< d]
post = data[data["Date of Transfer"]>=d]
x_train = pre[feature_cols]
y_train = pre[["SalePrice"]]
x_test = post[feature_cols]
y_test = post[["SalePrice"]]
return x_train.values, x_test.values, y_train.values.ravel(), y_test.values.ravel()
X_train, X_test, Y_train, Y_test = test_train_split_on_date(reduced, date(2015, 1, 1))
print X_train.shape
print X_test.shape
# #test train split
from sklearn.model_selection import train_test_split
features = reduced[['is_london', 'Property Type_D', 'Property Type_F',
'Property Type_O', 'Property Type_S', 'Property Type_T',
'Duration_F', 'Duration_L', 'Duration_U']]
targets = reduced[["SalePrice"]]
X_train, X_test, Y_train, Y_test = train_test_split(features.values, targets.values.ravel(), test_size=0.25, random_state=4)
print X_train.shape
print X_test.shape
Explanation: Training
Now, in preparation for training and testing, we can now split our dataset into a training set and a test set. We can either do this randomly or by an arbitrary date.
End of explanation
from sklearn.linear_model import Ridge, RidgeCV, ElasticNet, LassoCV, LassoLarsCV
from sklearn.model_selection import cross_val_score
def rmse_cv(model):
rmse= np.sqrt(-cross_val_score(model, X_train, Y_train, scoring="neg_mean_squared_error", cv = 5))
return(rmse)
model_ridge = Ridge()
Explanation: Models
Ridge Regression
Now we are going to use regularized linear regression models from the scikit learn module. I'm going to try both l_1(Lasso) and l_2(Ridge) regularization. I'll also define a function that returns the cross-validation rmse error so we can evaluate our models and pick the best tuning params
End of explanation
alphas = [0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 50, 75]
cv_ridge = [rmse_cv(Ridge(alpha = alpha)).mean() for alpha in alphas]
cv_ridge = pd.Series(cv_ridge, index = alphas)
cv_ridge.plot(title = "Validation - Just Do It")
plt.xlabel("alpha")
plt.ylabel("rmse")
Explanation: The main tuning parameter for the Ridge model is alpha - a regularization parameter that measures how flexible our model is. The higher the regularization the less prone our model will be to overfit. However it will also lose flexibility and might not capture all of the signal in the data.
End of explanation
cv_ridge.min()
Explanation: Note the U-ish shaped curve above. When alpha is too small the regularization is too strong and the model cannot capture all the complexities in the data. If however we let the model be too flexible (alpha large) the model begins to overfit. A value of alpha = 30 is about right based on the plot above.
End of explanation
model_lasso = LassoCV(alphas = [1, 0.1, 0.001, 0.0005]).fit(X_train, Y_train)
rmse_cv(model_lasso).mean()
Explanation: So for the Ridge regression we get a rmsle of about 0.68
Lasso Regression
Let's try out the Lasso model. We will do a slightly different approach here and use the built-in LassoCV to figure out the best alpha for us. Note that the alphas in LassoCV are effectively the inverse of the alphas in Ridge.
End of explanation
coef = pd.Series(model_lasso.coef_, index = features.columns)
print("Lasso picked " + str(sum(coef != 0)) + " variables and eliminated the other " + str(sum(coef == 0)) + " variables")
Explanation: Lasso seems to perform about the same, so we'll use it on the dataset. Another neat thing about the Lasso is that it does feature selection for you - setting coefficients of features it deems unimportant to zero. Let's take a look at the coefficients:
End of explanation
imp_coef = pd.DataFrame(coef.sort_values())
matplotlib.rcParams['figure.figsize'] = (8.0, 10.0)
imp_coef.plot(kind = "barh")
plt.title("Coefficients in the Lasso Model")
Explanation: Good job Lasso. One thing to note here however is that the features selected are not necessarily the "correct" ones - especially since there are a lot of collinear features in this dataset. One idea to try here is to run Lasso a few times on bootstrapped samples and see how stable the feature selection is.
We can also take a look directly at what the most important coefficients are:
End of explanation
from sklearn import ensemble
params = {'n_estimators': 500, 'max_depth': 2, 'min_samples_split': 2,
'learning_rate': 0.1, 'loss': 'ls'}
model = ensemble.GradientBoostingRegressor(**params)
model.fit(X_train, Y_train)
# compute test set deviance
test_score = np.zeros((params['n_estimators'],), dtype=np.float64)
for i, y_pred in enumerate(model.staged_predict(X_test)):
test_score[i] = model.loss_(Y_test, y_pred)
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.title('Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, model.train_score_, 'b-',
label='Training Set Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, test_score, 'r-',
label='Test Set Deviance')
plt.legend(loc='upper right')
plt.xlabel('Boosting Iterations')
plt.ylabel('Deviance')
Explanation: The most important positive feature is the London boolean-- whether or not this property is in London. This definitely makes sense. Then a few other location and quality features contributed positively. Some of the negative features make less sense and would be worth looking into more - it seems like they might come from unbalanced categorical variables.
Also note that unlike the feature importance you'd get from a random forest these are actual coefficients in your model - so you can say precisely why the predicted price is what it is. The only issue here is that we log_transformed both the target and the numeric features so the actual magnitudes are a bit hard to interpret.
Gradient Boosted Regressor
Let's add an xgboost model to our linear model to see if we can improve our score:
End of explanation
feature_importance = model.feature_importances_
# make importances relative to max importance
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + .5
plt.subplot(1, 2, 2)
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, features.columns[sorted_idx])
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
plt.show()
#model w/ new paramters
params = {'n_estimators': 100, 'max_depth': 2, 'min_samples_split': 2,
'learning_rate': 0.1, 'loss': 'ls'}
model_xgb = ensemble.GradientBoostingRegressor(**params)
model_xgb.fit(X_train, Y_train)
xgb_preds = model_xgb.predict(X_test)
lasso_preds = model_lasso.predict(X_test)
predictions = pd.DataFrame({"xgb":xgb_preds, "lasso":lasso_preds})
predictions.plot(x = "xgb", y = "lasso", kind = "scatter")
Explanation: In the figure above, the flat lines (after 100 iterations) show that the model is not overfitting. Thus we can choose 100 for the number of estimators in our model.
End of explanation
preds = 0.7*lasso_preds + 0.3*xgb_preds
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(Y_test, preds)
print("MSE: %.4f" % mse)
Explanation: Blended Regressor
The weights in the average (0.7, 0.3) are hyper-parameters - I used validation set to see what the best cutoff is. Basically this means I am weighting the preds from the lasso somewhat more heavily than the xgboost preds:
End of explanation |
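The 0.7/0.3 blend above was picked by hand; a small scan over the blend weight makes that choice explicit. A sketch (it reuses the test split purely for illustration - in practice a separate validation split should be used):
# Sketch: scan blend weights and keep the one with the lowest held-out MSE.
best_w, best_mse = None, np.inf
for w in np.linspace(0, 1, 21):
    blend = w * lasso_preds + (1 - w) * xgb_preds
    m = mean_squared_error(Y_test, blend)
    if m < best_mse:
        best_w, best_mse = w, m
print("best lasso weight: %.2f (MSE %.4f)" % (best_w, best_mse))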
11,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 2
Imports
Step1: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX
Step2: Then use interact to create a user interface for exploring your function
Step3: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument
Step4: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 2
Imports
End of explanation
import math as math
def plot_sine1(a, b):
x = np.linspace(0,4*math.pi,300)
f = plt.figure(figsize=(15,5))
plt.plot(np.sin(a*x+b))
plt.title('Sine Graph')
plt.xlabel('x')
plt.ylabel('sin(ax+b)')
plt.tick_params(right=False, top=False, direction='out')
plt.xticks([0,math.pi,2*math.pi,3*math.pi,4*math.pi],['0','$pi$','$2*pi$','$3*pi$','$4*pi$'])
plt.xlim(0,4*math.pi)
plot_sine1(5, 3.4)
Explanation: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
End of explanation
interact(plot_sine1, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1))
assert True # leave this for grading the plot_sine1 exercise
Explanation: Then use interact to create a user interface for exploring your function:
a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
End of explanation
def plot_sine2(a,b,style='b-'):
x = np.linspace(0,4*math.pi,300)
f = plt.figure(figsize=(15,5))
plt.plot(np.sin(a*x+b), '%s' % style)
plt.title('Sine Graph')
plt.xlabel('x')
plt.ylabel('sin(ax+b)')
plt.tick_params(right=False, top=False, direction='out')
plt.xticks([0,math.pi,2*math.pi,3*math.pi,4*math.pi],['0','$pi$','$2*pi$','$3*pi$','$4*pi$'])
plt.xlim(0,4*math.pi)
plot_sine2(4.0, -1.0, 'r--')
Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:
dashed red: r--
blue circles: bo
dotted black: k.
Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.
End of explanation
interact(plot_sine2, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1), style={'dotted blue line': 'b--', 'black circles': 'ko',
'red triangles': 'r^'})
assert True # leave this for grading the plot_sine2 exercise
Explanation: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.
End of explanation |
11,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Making a movie of reaction-diffusion concentrations
We recommend creating and using a virtual environment for NetPyNE tutorials. To do so, enter the following commands into your terminal
Step1: At this point, we can replace sim.simulate() in our init.py file with sim.runSimWithIntervalFunc(100.0, sim.analysis.plotRxDConcentration, timeRange=None, funcArgs=plotArgs).
Then we can run the simulation.
Step2: This should run the simulation, pausing every 100 ms to create a reaction-diffusion concentration plot.
At this point, we can create a movie (an animated gif) from our frames. | Python Code:
plotArgs = {
'speciesLabel': 'ca',
'regionLabel' : 'ecs',
'saveFig' : 'movie',
'showFig' : False,
'clim' : [1.9997, 2.000],
}
Explanation: Making a movie of reaction-diffusion concentrations
We recommend creating and using a virtual environment for NetPyNE tutorials. To do so, enter the following commands into your terminal:
mkdir netpyne_tuts
cd netpyne_tuts
python3 -m venv env
source env/bin/activate
python3 -m pip install --upgrade pip setuptools wheel
python3 -m pip install --upgrade ipython ipykernel jupyter
python3 -m pip install --upgrade neuron
git clone https://github.com/Neurosim-lab/netpyne.git
python3 -m pip install -e netpyne
ipython kernel install --user --name=env
For this tutorial, you will also need to install natsort and imageio.
python3 -m pip install natsort imageio
Then you can copy the example directory we will be using into netpyne_tuts, copy this notebook tutorial into it, and compile the mod files.
cp -r netpyne/examples/rxd_net .
cp netpyne/netpyne/tutorials/rxd_movie_tut/rxd_movie_tut.ipynb rxd_net
cd rxd_net
nrnivmodl mod
Finally, you can launch this tutorial in a Jupyter notebook.
jupyter notebook rxd_movie_tut.ipynb
Note that the network parameters are defined in netParams.py, the simulation configuration is specified in cfg.py and the steps to actually run the simulation are in init.py.
From the terminal, you could run this simulation with the command python3 init.py. Or, if you have MPI properly installed, you could run the sim on four cores with the command mpiexec -np 4 nrniv -python -mpi init.py.
To run the simulation from this notebook, you would execute %run init.py or !mpiexec -np 4 nrniv -python -mpi init.py.
However, we need to modify the simulation run so that a movie frame (figure) is generated at specified times. You can modify init.py to do this, but here we will do it interactively.
First, lets look at what's in init.py:
from netpyne import sim
from netParams import netParams
from cfg import cfg
sim.initialize(netParams, cfg)
sim.net.createPops()
sim.net.createCells()
sim.net.connectCells()
sim.net.addStims()
sim.net.addRxD()
sim.setupRecording()
sim.simulate()
sim.analyze()
We want to replace sim.simulate() with sim.runSimWithIntervalFunc(), which pauses at a set interval and executes the specified function. See more details on runSimWithIntervalFunc here: http://netpyne.org/netpyne.sim.run.html#netpyne.sim.run.runSimWithIntervalFunc.
The function runSimWithIntervalFunc requires two arguments: the time interval at which to execute the function (interval) and the function to be executed (func). It also has two optional arguments: a limited time range over which to execute the function (timeRange) and a dictionary of arguments to feed into the function to be executed (funcArgs).
For this example, which runs for 1000 ms, we will make a short movie with 10 frames by setting interval=100. We will use the function sim.analysis.plotRxDConcentration and we want to plot the calcium concentration in the extracellular space. We also need to set saveFig to 'movie', and set the colorbar limits (so they stay the same in each movie frame). In order to feed these arguments into the plotting function at each time step, we will create a dictionary:
End of explanation
from netpyne import sim
from netParams import netParams
from cfg import cfg
sim.initialize(netParams, cfg)
sim.net.createPops()
sim.net.createCells()
sim.net.connectCells()
sim.net.addStims()
sim.net.addRxD()
sim.setupRecording()
#sim.simulate()
sim.runSimWithIntervalFunc(100.0, sim.analysis.plotRxDConcentration, timeRange=None, funcArgs=plotArgs)
sim.analyze()
Explanation: At this point, we can replace sim.simulate() in our init.py file with sim.runSimWithIntervalFunc(100.0, sim.analysis.plotRxDConcentration, timeRange=None, funcArgs=plotArgs).
Then we can run the simulation.
End of explanation
import os
import natsort
import imageio
images = []
filenames = natsort.natsorted([file for file in os.listdir() if 'movie' in file and file.endswith('.png')])
for filename in filenames:
images.append(imageio.imread(filename))
imageio.mimsave('rxd_conc_movie.gif', images)
Explanation: This should run the simulation, pausing every 100 ms to create a reaction-diffusion concentration plot.
At this point, we can create a movie (an animated gif) from our frames.
End of explanation |
11,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning from Data
Decision Trees are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
Paper Exercise
Let us start with a simple exercise in classifying credit risk.
We have the following features in our dataset.
- Risk - ordinal (label)
- Income - continuous
- Credit History - ordinal
We want to find out the rules that would help us classify the three risk type - This is a paper and pen exercise first!!
Step1: Plotting the Data
Step2: Preparing Data
We have one ordinal variable (Risk) and one nominal variable (Credit History)
Lets use a dictionary for encoding nominal variable
Step3: Decision Tree Classifier
Step4: Visualise the Tree
Step5: Understanding how the Decision Tree works
Terminology
- Each root node represents a single input variable (x) and a split point on that variable.
- The leaf nodes of the tree contain an output variable (y) which is used to make a prediction.
Growing the tree
The first choice we have is how many branches we split the trees. And we choose Binary Tree because otherwise it will explode due to combinatorial explosion. So BINARY TREES is a practical consideration.
The second decision is to choose which variable and where to split it. We need to have an objective function to do this
One objective function is to maximize the information gain (IG) at each split | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
df = pd.read_csv("data/creditRisk.csv")
df.head()
df.dtypes
Explanation: Learning from Data
Decision Trees are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
Paper Exercise
Let us start with a simple exercise in classifying credit risk.
We have the following features in our dataset.
- Risk - ordinal (label)
- Income - continuous
- Credit History - ordinal
We want to find out the rules that would help us classify the three risk types - This is a paper and pen exercise first!!
End of explanation
import seaborn as sns
sns.stripplot(data = df, x = "Income", y = "Credit History", hue = "Risk", size = 10)
Explanation: Plotting the Data
End of explanation
df.Risk.unique()
Risk_mapping = {
'High': 2,
'Moderate': 1,
'Low': 0}
df['Risk'] = df['Risk'].map(Risk_mapping)
df['Credit History'].unique()
Credit_mapping = {
'Unknown': 0,
'Bad': -1,
'Good': 1}
df['Credit History'] = df['Credit History'].map(Credit_mapping)
df.head()
Explanation: Preparing Data
We have one ordinal variable (Risk) and one nominal variable (Credit History)
Lets use a dictionary for encoding nominal variable
End of explanation
data = df.iloc[:,0:2]
target = df.iloc[:,2:3]
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf
clf = clf.fit(data, target)
Explanation: Decision Tree Classifier
End of explanation
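The fitted tree can now score a new applicant. A sketch with a hypothetical applicant (the income figure and the encoded credit history below are made-up values, and the column order is assumed to follow data, i.e. Income then Credit History):
# Sketch: predict the encoded risk class (0=Low, 1=Moderate, 2=High per the mapping above)
# for a hypothetical applicant with income 60000 and a good credit history (encoded 1).
print(clf.predict([[60000, 1]]))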
import pydotplus
from IPython.display import Image
data.columns
target.columns
dot_data = tree.export_graphviz(clf, out_file='tree.dot', feature_names=data.columns,
class_names=['Low', 'Moderate', 'High'], filled=True,
rounded=True, special_characters=True)
graph = pydotplus.graph_from_dot_file('tree.dot')
Image(graph.create_png())
Explanation: Visualise the Tree
End of explanation
from __future__ import division
x_min, x_max = data.ix[:, 0].min() - 2000, data.ix[:, 0].max() + 2000
y_min, y_max = data.ix[:, 1].min() - 1, data.ix[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, (x_max - x_min)/100), np.arange(y_min, y_max, (y_max - y_min)/100))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.viridis, alpha = 0.5)
plt.scatter(x = data.ix[:,0], y = data.ix[:,1], c = target, s = 100, cmap=plt.cm.magma)
Explanation: Understanding how the Decision Tree works
Terminology
- Each root node represents a single input variable (x) and a split point on that variable.
- The leaf nodes of the tree contain an output variable (y) which is used to make a prediction.
Growing the tree
The first choice is how many branches to create at each split, and we choose binary splits because allowing more would explode combinatorially. So BINARY TREES are a practical consideration.
The second decision is which variable to split on and where to split it. We need an objective function to make this choice.
One objective function is to maximize the information gain (IG) at each split:
$$ IG(D_p,f)= I(D_p) - \frac{N_{right}}{N} I(D_{right}) - \frac{N_{left}}{N} I(D_{left}) $$
where:
- f is the feature to perform the split
- $D_p$, $D_{left}$, and $D_{right}$ are the datasets of the parent, left and right child node, respectively
- I is the impurity measure
- N is the total number of samples
- $N_{left}$ and $N_{right}$ is the number of samples in the left and right child node.
Now we need to first define an Impurity measure. The three popular impurity measures are:
- Gini Impurity
- Entropy
- Classification Error
Gini Impurity and Entropy lead to similar results when growing the tree, while Classification error is not as useful for growing the tree (but is useful for pruning it) - See example here http://sebastianraschka.com/faq/docs/decision-tree-binary.html
Let's understand Gini Impurity a little better. Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset. It can be computed by summing the probability $t_{i}$ of an item with label $i$ being chosen times the probability $1-t_{i}$ of a mistake in categorizing that item.
$$ I_{G}(f)=\sum_{i=1}^{J}t_{i}(1-t_{i})=\sum_{i=1}^{J}(t_{i}-t_{i}^{2})=\sum_{i=1}^{J}t_{i}-\sum_{i=1}^{J}t_{i}^{2}=1-\sum_{i=1}^{J}t_{i}^{2} $$
Lets calculate the Gini for the overall data set:
Low - 4, Moderate - 6, High - 8 and total observations are 18
$$ I_G(t) = 1 - \left(\frac{6}{18}\right)^2 - \left(\frac{4}{18}\right)^2 - \left(\frac{8}{18}\right)^2 = 1 - \frac{116}{324} = 0.642 $$
scikit-learn uses an optimized CART algorithm, which takes a greedy approach known as recursive binary splitting. This is a numerical procedure where all the values are lined up and different split points are tried and tested using an objective cost function. The split with the best cost (lowest cost, because we minimize cost) is selected.
Another way to think of this is that a learned binary tree is actually a partitioning of the input space. You can think of each input variable as a dimension in a p-dimensional space. The decision tree splits this space up into rectangles (when p=2 input variables) or hyper-rectangles with more inputs.
We can draw these partitions for our dataset
End of explanation |
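The hand calculation above is easy to verify numerically; a short sketch using the class counts quoted above (4, 6 and 8 out of 18):
# Sketch: Gini impurity of the parent node from the class counts.
import numpy as np
counts = np.array([4, 6, 8])      # Low, Moderate, High
p = counts / counts.sum()         # class proportions out of 18
print(round(1 - np.sum(p ** 2), 3))  # ~0.642, matching the hand calculation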
11,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Using the SavedModel format
Step2: We will use a photo of Grace Hopper as a running example, and a pre-trained Keras image classification model because it is easy to use. Custom models work too, and are covered in detail later.
Step3: The top prediction for this image is "military uniform".
Step4: The save path follows a convention used by TensorFlow Serving where the last path component (1/ here) is a version number for the model - it lets tools such as TensorFlow Serving reason about relative freshness.
You can load the SavedModel back into Python with tf.saved_model.load and see how Admiral Hopper's image is classified.
Step5: Imported signatures always return dictionaries. To customize signature names and output dictionary keys, see Specifying signatures during export.
Step6: Running inference from the SavedModel gives the same result as the original model.
Step7: Running a SavedModel in TensorFlow Serving
SavedModels are usable from Python (more on that below), but production environments typically use a dedicated service for inference without running Python code. This is easy to set up from a SavedModel using TensorFlow Serving.
See the TensorFlow Serving REST tutorial for an end-to-end tensorflow-serving example.
The SavedModel format on disk
A SavedModel is a directory containing serialized signatures and the state needed to run them, including variable values and vocabularies.
Step8: The saved_model.pb file stores the actual TensorFlow program, or model, and a set of named signatures, each identifying a function that accepts tensor inputs and produces tensor outputs.
SavedModels may contain multiple variants of the model (multiple v1.MetaGraphDefs, identified with the --tag_set flag to saved_model_cli), but this is rare. APIs which create multiple variants of a model include tf.Estimator.experimental_export_all_saved_models and, in TensorFlow 1.x, tf.saved_model.Builder.
Step9: The variables directory contains a standard training checkpoint (see the guide to training checkpoints).
Step10: The assets directory contains files used by the TensorFlow graph, for example text files used to initialize vocabulary tables. It is unused in this example.
SavedModels may have an assets.extra directory for any files not used by the TensorFlow graph, for example information for consumers about what to do with the SavedModel. TensorFlow itself does not use this directory.
Saving a custom model
tf.saved_model.save supports saving tf.Module objects and their subclasses, such as tf.keras.Layer and tf.keras.Model.
Let's look at an example of saving and restoring a tf.Module.
Step11: When you save a tf.Module, any tf.Variable attributes, tf.function-decorated methods, and tf.Modules found via recursive traversal are saved. (See the Checkpoint tutorial for more about this recursive traversal.) However, any Python attributes, functions, and data are lost. This means that when a tf.function is saved, no Python code is saved.
If no Python code is saved, how does SavedModel know how to restore the function?
Briefly, tf.function works by tracing the Python code to generate a ConcreteFunction (a callable wrapper around tf.Graph). When you save a tf.function, you're really saving the tf.function's cache of ConcreteFunctions.
To learn more about the relationship between tf.function and ConcreteFunction, see the tf.function guide.
Step12: Loading and using a custom model
When you load a SavedModel in Python, all tf.Variable attributes, tf.function-decorated methods, and tf.Modules are restored in the same object structure as the original saved tf.Module.
Step13: Because no Python code is saved, calling a tf.function with a new input signature will fail:
python
imported(tf.constant([3.]))
<pre>ValueError
Step14: General fine-tuning
Compared with a plain __call__, a Keras SavedModel provides more details to address more advanced cases of fine-tuning. TensorFlow Hub recommends providing the following of those, if applicable, in SavedModels shared for fine-tuning:
If the model uses dropout or another technique in which the forward pass differs between training and inference (such as batch normalization), the __call__ method takes an optional, Python-valued training= argument that defaults to False but can be set to True.
Next to the __call__ attribute, there are .variable and .trainable_variable attributes with the corresponding lists of variables. A variable that was originally trainable but is meant to be frozen during fine-tuning is omitted from .trainable_variables.
For the sake of frameworks like Keras that represent weight regularizers as attributes of layers or sub-models, there can also be a .regularization_losses attribute. It holds a list of zero-argument functions whose values are meant to be added to the total loss.
Going back to the initial MobileNet example, here is what that looks like in practice:
Step15: Specifying signatures during export
Tools like TensorFlow Serving and saved_model_cli can interact with SavedModels. To help these tools determine which ConcreteFunctions to use, you need to specify serving signatures. tf.keras.Models automatically specify serving signatures, but for custom modules you must declare a serving signature explicitly.
IMPORTANT: Unless you need to export your model to an environment other than TensorFlow 2.x with Python, you probably don't need to export signatures explicitly. If you're looking for a way of enforcing an input signature for a specific function, see the input_signature argument to tf.function.
By default, signatures are not declared in a custom tf.Module.
Step16: To declare a serving signature, specify a ConcreteFunction using the signatures keyword argument. When specifying a single signature, its signature key will be 'serving_default', which is saved as the constant tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY.
Step17: To export multiple signatures, pass a dictionary of signature keys to ConcreteFunctions. Each signature key corresponds to one ConcreteFunction.
Step18: By default, the output tensor names are fairly generic, like output_0. To control the names of outputs, modify your tf.function to return a dictionary that maps output names to outputs. The names of inputs are derived from the Python function argument names.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import os
import tempfile
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
tmpdir = tempfile.mkdtemp()
physical_devices = tf.config.list_physical_devices('GPU')
for device in physical_devices:
tf.config.experimental.set_memory_growth(device, True)
file = tf.keras.utils.get_file(
"grace_hopper.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg")
img = tf.keras.preprocessing.image.load_img(file, target_size=[224, 224])
plt.imshow(img)
plt.axis('off')
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.mobilenet.preprocess_input(
x[tf.newaxis,...])
Explanation: Using the SavedModel format
A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model-building code to run, which makes it useful for sharing or deploying (with TFLite, TensorFlow.js, TensorFlow Serving, or TensorFlow Hub).
You can save and load a model in the SavedModel format using the following APIs:
The low-level tf.saved_model API. This document describes how to use this API in detail.
Save: tf.saved_model.save(model, path_to_dir)
Load: model = tf.saved_model.load(path_to_dir)
The high-level tf.keras.Model API. See the Keras save and serialize guide.
If you just want to save/load weights during training, see the checkpoints guide.
Creating a SavedModel from Keras
For a quick introduction, this section exports a pre-trained Keras model and serves image classification requests with it. The rest of the guide will fill in details and discuss other ways to create SavedModels.
End of explanation
labels_path = tf.keras.utils.get_file(
'ImageNetLabels.txt',
'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
pretrained_model = tf.keras.applications.MobileNet()
result_before_save = pretrained_model(x)
decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1]
print("Result before saving:\n", decoded)
Explanation: We will use a photo of Grace Hopper as a running example, and a pre-trained Keras image classification model because it is easy to use. Custom models work too, and are covered in detail later.
End of explanation
mobilenet_save_path = os.path.join(tmpdir, "mobilenet/1/")
tf.saved_model.save(pretrained_model, mobilenet_save_path)
Explanation: The top prediction for this image is "military uniform".
End of explanation
loaded = tf.saved_model.load(mobilenet_save_path)
print(list(loaded.signatures.keys())) # ["serving_default"]
Explanation: The save path follows a convention used by TensorFlow Serving where the last path component (1/ here) is a version number for your model. It lets tools like TensorFlow Serving reason about the relative freshness.
You can load the SavedModel back into Python with tf.saved_model.load and see how Admiral Hopper's image is classified.
End of explanation
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
Explanation: Imported signatures always return dictionaries. To customize signature names and output dictionary keys, see Specifying signatures during export.
End of explanation
labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]
decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1]
print("Result after saving and loading:\n", decoded)
Explanation: Running inference from the SavedModel gives the same result as the original model.
End of explanation
!ls {mobilenet_save_path}
Explanation: Running a SavedModel in TensorFlow Serving
SavedModels are usable from Python (more on that below), but production environments typically use a dedicated service for inference without running Python code. This is easy to set up from a SavedModel using TensorFlow Serving.
See the TensorFlow Serving REST tutorial for an end-to-end tensorflow-serving example.
The SavedModel format on disk
A SavedModel is a directory containing serialized signatures and the state needed to run them, including variable values and vocabularies.
End of explanation
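# --- Added aside (not in the original guide): once a tensorflow_model_server is
# running and serving this SavedModel (the model name "mobilenet" and port 8501
# below are assumptions), its REST API can be queried from Python like this.
import json
import requests
serving_payload = json.dumps({"signature_name": "serving_default",
                              "instances": x.tolist()})
# response = requests.post("http://localhost:8501/v1/models/mobilenet:predict",
#                          data=serving_payload,
#                          headers={"content-type": "application/json"})
# predictions = json.loads(response.text)["predictions"]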
!saved_model_cli show --dir {mobilenet_save_path} --tag_set serve
Explanation: The saved_model.pb file stores the actual TensorFlow program, or model, and a set of named signatures, each identifying a function that accepts tensor inputs and produces tensor outputs.
SavedModels may contain multiple variants of the model (multiple v1.MetaGraphDefs, identified with the --tag_set flag to saved_model_cli), but this is rare. APIs which create multiple variants of a model include tf.Estimator.experimental_export_all_saved_models and, in TensorFlow 1.x, tf.saved_model.Builder.
End of explanation
!ls {mobilenet_save_path}/variables
Explanation: The variables directory contains a standard training checkpoint (see the guide to training checkpoints).
End of explanation
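# Optional check (an addition to the guide): the checkpoint prefix inside the
# SavedModel can be inspected directly. "variables/variables" is the standard
# SavedModel layout; adjust the prefix if your layout differs.
ckpt_prefix = os.path.join(mobilenet_save_path, "variables/variables")
print(tf.train.list_variables(ckpt_prefix)[:5])  # a few (name, shape) pairs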
class CustomModule(tf.Module):
def __init__(self):
super(CustomModule, self).__init__()
self.v = tf.Variable(1.)
@tf.function
def __call__(self, x):
print('Tracing with', x)
return x * self.v
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
def mutate(self, new_v):
self.v.assign(new_v)
module = CustomModule()
Explanation: The assets directory contains files used by the TensorFlow graph, for example text files used to initialize vocabulary tables. It is unused in this example.
SavedModels may have an assets.extra directory for any files not used by the TensorFlow graph, for example information for consumers about what to do with the SavedModel. TensorFlow itself does not use this directory.
Saving a custom model
tf.saved_model.save supports saving tf.Module objects and their subclasses, like tf.keras.Layer and tf.keras.Model.
Let's look at an example of saving and restoring a tf.Module.
End of explanation
module_no_signatures_path = os.path.join(tmpdir, 'module_no_signatures')
module(tf.constant(0.))
print('Saving model...')
tf.saved_model.save(module, module_no_signatures_path)
Explanation: When you save a tf.Module, any tf.Variable attributes, tf.function-decorated methods, and tf.Modules found via recursive traversal are saved. (See the Checkpoint tutorial for more about this recursive traversal.) However, any Python attributes, functions, and data are lost. This means that when a tf.function is saved, no Python code is saved.
If no Python code is saved, how does the SavedModel know how to restore the function?
Briefly, tf.function works by tracing the Python code to generate a ConcreteFunction (a callable wrapper around tf.Graph). When saving a tf.function, you're really saving the tf.function's cache of ConcreteFunctions.
To learn more about the relationship between tf.function and ConcreteFunctions, see the tf.function guide.
End of explanation
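# Small illustration of the tracing behaviour described above (an aside, not in
# the original guide): a new input shape or dtype triggers a new trace, and each
# trace is cached as a ConcreteFunction.
module(tf.constant(0.))          # already traced for a scalar float32, no retrace
module(tf.constant([1.0, 2.0]))  # new shape, prints "Tracing with ..." again
print(module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32)))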
imported = tf.saved_model.load(module_no_signatures_path)
assert imported(tf.constant(3.)).numpy() == 3
imported.mutate(tf.constant(2.))
assert imported(tf.constant(3.)).numpy() == 6
Explanation: Loading and using a custom model
When you load a SavedModel in Python, all tf.Variable attributes, tf.function-decorated methods, and tf.Modules are restored in the same object structure as the original saved tf.Module.
End of explanation
optimizer = tf.optimizers.SGD(0.05)
def train_step():
with tf.GradientTape() as tape:
loss = (10. - imported(tf.constant(2.))) ** 2
variables = tape.watched_variables()
grads = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(grads, variables))
return loss
for _ in range(10):
# "v" approaches 5, "loss" approaches 0
print("loss={:.2f} v={:.2f}".format(train_step(), imported.v.numpy()))
Explanation: Because no Python code is saved, calling a tf.function with a new input signature will fail:
imported(tf.constant([3.]))
ValueError: Could not find matching function to call for canonicalized inputs ((<tf.Tensor shape=(1,) dtype=float32>,), {}). Only existing signatures are [((TensorSpec(shape=(), dtype=tf.float32, name=u'x'),), {})].
Basic fine-tuning
Variable objects are available, and you can backprop through imported functions. That is enough to fine-tune (i.e. retrain) a SavedModel in simple cases.
End of explanation
loaded = tf.saved_model.load(mobilenet_save_path)
print("MobileNet has {} trainable variables: {}, ...".format(
len(loaded.trainable_variables),
", ".join([v.name for v in loaded.trainable_variables[:5]])))
trainable_variable_ids = {id(v) for v in loaded.trainable_variables}
non_trainable_variables = [v for v in loaded.variables
if id(v) not in trainable_variable_ids]
print("MobileNet also has {} non-trainable variables: {}, ...".format(
len(non_trainable_variables),
", ".join([v.name for v in non_trainable_variables[:3]])))
Explanation: General fine-tuning
A SavedModel from Keras provides more details than a plain __call__ to address more advanced cases of fine-tuning. TensorFlow Hub recommends providing the following in SavedModels shared for the purpose of fine-tuning, if applicable:
If the model uses dropout or another technique in which the forward pass differs between training and inference (like batch normalization), the __call__ method takes an optional, Python-valued training= argument that defaults to False but can be set to True.
Next to the __call__ attribute, there are .variable and .trainable_variable attributes with the corresponding lists of variables. A variable that was originally trainable but is meant to be frozen during fine-tuning is omitted from .trainable_variables.
For the sake of frameworks like Keras that represent weight regularizers as attributes of layers or sub-models, there can also be a .regularization_losses attribute. It holds a list of zero-argument functions whose values are meant to be added to the total loss.
Going back to the initial MobileNet example, you can see some of those in action:
End of explanation
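# A hedged sketch of one fine-tuning step along the lines described above. It
# assumes the loaded Keras SavedModel accepts training=True and exposes
# .trainable_variables; .regularization_losses is looked up defensively because
# it may not be present. The target label here is a made-up placeholder.
finetune_optimizer = tf.optimizers.SGD(1e-3)
fake_labels = tf.one_hot([652], depth=1000)  # hypothetical target class
loss_fn = tf.keras.losses.CategoricalCrossentropy()
with tf.GradientTape() as tape:
    predictions = loaded(x, training=True)
    finetune_loss = loss_fn(fake_labels, predictions)
    for reg_loss_fn in getattr(loaded, "regularization_losses", []):
        finetune_loss += reg_loss_fn()
grads = tape.gradient(finetune_loss, loaded.trainable_variables)
finetune_optimizer.apply_gradients(zip(grads, loaded.trainable_variables))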
assert len(imported.signatures) == 0
Explanation: Specifying signatures during export
Tools like TensorFlow Serving and saved_model_cli can interact with SavedModels. To help these tools determine which ConcreteFunctions to use, you need to specify serving signatures. tf.keras.Models automatically specify serving signatures, but for custom modules you have to declare a serving signature explicitly.
IMPORTANT: Unless you need to export your model to an environment other than TensorFlow 2.x with Python, you probably don't need to export signatures explicitly. If you're looking for a way of enforcing an input signature for a specific function, see the input_signature argument to tf.function.
By default, signatures aren't declared in a custom tf.Module.
End of explanation
module_with_signature_path = os.path.join(tmpdir, 'module_with_signature')
call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
tf.saved_model.save(module, module_with_signature_path, signatures=call)
imported_with_signatures = tf.saved_model.load(module_with_signature_path)
list(imported_with_signatures.signatures.keys())
Explanation: To declare a serving signature, specify a ConcreteFunction using the signatures keyword argument. When specifying a single signature, its signature key will be 'serving_default', which is saved as the constant tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY.
End of explanation
module_multiple_signatures_path = os.path.join(tmpdir, 'module_with_multiple_signatures')
signatures = {"serving_default": call,
"array_input": module.__call__.get_concrete_function(tf.TensorSpec([None], tf.float32))}
tf.saved_model.save(module, module_multiple_signatures_path, signatures=signatures)
imported_with_multiple_signatures = tf.saved_model.load(module_multiple_signatures_path)
list(imported_with_multiple_signatures.signatures.keys())
Explanation: To export multiple signatures, pass a dictionary of signature keys to ConcreteFunctions. Each signature key corresponds to one ConcreteFunction.
End of explanation
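# Quick check (an addition): each exported signature can be called through
# .signatures, and, as noted earlier, signature functions return dictionaries.
array_fn = imported_with_multiple_signatures.signatures["array_input"]
print(array_fn(tf.constant([0.0, 1.0, 2.0])))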
class CustomModuleWithOutputName(tf.Module):
def __init__(self):
super(CustomModuleWithOutputName, self).__init__()
self.v = tf.Variable(1.)
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
def __call__(self, x):
return {'custom_output_name': x * self.v}
module_output = CustomModuleWithOutputName()
call_output = module_output.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
module_output_path = os.path.join(tmpdir, 'module_with_output_name')
tf.saved_model.save(module_output, module_output_path,
signatures={'serving_default': call_output})
imported_with_output_name = tf.saved_model.load(module_output_path)
imported_with_output_name.signatures['serving_default'].structured_outputs
Explanation: By default, the output tensor names are fairly generic, like output_0. To control the names of the outputs, modify your tf.function to return a dictionary that maps output names to outputs. The names of the inputs are derived from the Python function argument names.
End of explanation |
11,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 1
Step1: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Useful SFrame summary functions
In order to make use of the closed form soltion as well as take advantage of graphlab's built in functions we will review some important ones. In particular
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Rembember that we train on train_data!
Step8: Predicting Values
Now that we have the model parameters
Step9: Now that we can calculate a prediction given the slop and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estiamted above.
Quiz Question
Step10: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question
Step13: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Comlplete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that coses $800,000 to be.
Quiz Question
Step15: New Model
Step16: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question | Python Code:
import sys
sys.path.append('C:\Anaconda2\envs\dato-env\Lib\site-packages')
import graphlab
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complte is optional and is there to assist you with solving the problems but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function does it in one call
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
Explanation: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built in functions we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
# if we want to multiply every price by 0.5 it's as simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
Explanation: As we see we get the same answer both ways
End of explanation
def simple_linear_regression(input_feature, output):
Xi = input_feature
Yi = output
N = len(Xi)
# compute the mean of input_feature and output
Ymean = Yi.mean()
Xmean = Xi.mean()
# compute the product of the output and the input_feature and its mean
SumYiXi = (Yi * Xi).sum()
YiXiByN = (Yi.sum() * Xi.sum()) / N
# compute the squared value of the input_feature and its mean
XiSq = (Xi * Xi).sum()
XiXiByN = (Xi.sum() * Xi.sum()) / N
# use the formula for the slope
slope = (SumYiXi - YiXiByN) / (XiSq - XiXiByN)
# use the formula for the intercept
intercept = Ymean - (slope * Xmean)
return (intercept, slope)
Explanation: Aside: The python notation x.xxe+yy means x.xx * 10^(yy). e.g 100 = 10^2 = 1*10^2 = 1e2
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
Hint:
https://www.coursera.org/learn/ml-regression/module/9crXk/discussions/MZT-xZnVEeWPmAru8qzZow
Follow slide 68, which is Approach 1: Set gradient = 0
End of explanation
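# Optional cross-check (an addition to the assignment): the closed form
# implemented in simple_linear_regression above can be sanity-checked against
# numpy's polyfit on a small toy example, independently of SFrames.
import numpy as np
toy_x = np.array([0., 1., 2., 3., 4.])
toy_y = 1. + 1. * toy_x
toy_slope, toy_intercept = np.polyfit(toy_x, toy_y, 1)
print("numpy check -- intercept: " + str(toy_intercept) + ", slope: " + str(toy_slope))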
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1
End of explanation
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = intercept + (slope * input_feature)
return predicted_values
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
Explanation: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), What is the predicted price for a house with 2650 sqft?
End of explanation
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predicted_values = intercept + (slope * input_feature)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residuals = output - predicted_values
# square the residuals and add them up
RSS = (residuals * residuals).sum()
return(RSS)
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
End of explanation
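# Aside (not part of the original assignment): with plain numpy arrays the same
# RSS computation is a one-liner, handy for spot-checking get_residual_sum_of_squares.
import numpy as np
def rss_numpy(x, y, intercept, slope):
    residuals = np.asarray(y) - (intercept + slope * np.asarray(x))
    return (residuals * residuals).sum()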
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept)/slope
return estimated_feature
Explanation: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
End of explanation
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
sqft_intercept, sqft_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data!
End of explanation
# Compute RSS when using bedrooms on TEST data:
sqft_intercept, sqft_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
rss_prices_on_bedrooms = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Bedrooms is : ' + str(rss_prices_on_bedrooms)
# Compute RSS when using squarfeet on TEST data:
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
rss_prices_on_sqft = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
End of explanation |
11,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Segmentation Evaluation <a href="https
Step2: Utility method for display
Step3: Fetch the data
Retrieve a single CT scan and three manual delineations of a liver tumor. Visual inspection of the data highlights the variability between experts.
Step4: Derive a reference
There are a variety of ways to derive a reference segmentation from multiple expert inputs. Several options, there are more, are described in "A comparison of ground truth estimation methods", A. M. Biancardi, A. C. Jirapatnakul, A. P. Reeves.
Two methods that are available in SimpleITK are <b>majority vote</b> and the <b>STAPLE</b> algorithm.
Step5: Evaluate segmentations using the reference
Once we derive a reference from our experts input we can compare segmentation results to it.
Note that in this notebook we compare the expert segmentations to the reference derived from them. This is not relevant for algorithm evaluation, but it can potentially be used to rank your experts.
In this specific implementation we take advantage of the fact that we have a binary segmentation with 1 for foreground and 0 for background.
Step6: Improved output
If the pandas package is installed in your Python environment then you can easily produce high quality output.
Step7: You can also export the data as a table for your LaTeX manuscript using the to_latex function.
<b>Note</b>
Step9: Segmentation Representation and the Hausdorff Distance
The results of segmentation can be represented as a set of closed contours/surfaces or as the discrete set of points (pixels/voxels) belonging to the segmented objects. Ideally using either representation would yield the same values for the segmentation evaluation metrics. Unfortunately, the Hausdorff distance computed directly from each of these representations will generally not yield the same results. In some cases, such as the one above, the two values do match (table entries hausdorff_distance and max_surface_distance).
The following example illustrates that the Hausdorff distance for the contour/surface representation and the discrete point set representing the segmented object differ, and that there is no correlation between the two.
Our object of interest is annulus shaped (e.g. myocardium in short axis MRI). It has an internal radius, $r$, and an external radius $R>r$. We over-segmented the object and obtained a filled circle of radius $R$.
The contour/surface based Hausdorff distance is $R-r$, the distance between external contours is zero and between internal and external contours is $R-r$. The pixel/voxel object based Hausdorff distance is $r$, corresponding to the distance between the center point in the over-segmented result to the inner circle contour. For different values of $r$ we can either have $R-r \geq r$ or $R-r \leq r$.
Note | Python Code:
import SimpleITK as sitk
import numpy as np
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
%matplotlib inline
import matplotlib.pyplot as plt
import gui
from ipywidgets import interact, fixed
Explanation: Segmentation Evaluation <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F34_Segmentation_Evaluation.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
Evaluating segmentation algorithms is most often done using reference data to which you compare your results.
In the medical domain reference data is commonly obtained via manual segmentation by an expert (don't forget to thank your clinical colleagues for their hard work). When you are resource limited, the reference data may be defined by a single expert. This is less than ideal. When multiple experts provide you with their input then you can potentially combine them to obtain reference data that is closer to the ever elusive "ground truth". In this notebook we show two approaches to combining input from multiple observers, majority vote and the Simultaneous Truth and Performance Level
Estimation (STAPLE).
Once we have a reference, we compare the algorithm's performance using multiple criteria, as usually there is no single evaluation measure that conveys all of the relevant information. In this notebook we illustrate the use of the following evaluation criteria:
* Overlap measures:
* Jaccard and Dice coefficients
* false negative and false positive errors
* Surface distance measures:
* Hausdorff distance (symmetric)
* mean, median, max and standard deviation between surfaces
* Volume measures:
* volume similarity $ \frac{2*(v1-v2)}{v1+v2}$
The relevant criteria are task dependent, so you need to ask yourself whether you are interested in detecting spurious errors or not (mean or max surface distance), whether over/under segmentation should be differentiated (volume similarity and Dice or just Dice), and what is the ratio between acceptable errors and the size of the segmented object (Dice coefficient may be too sensitive to small errors when the segmented object is small and not sensitive enough to large errors when the segmented object is large).
The data we use in the notebook is a set of manually segmented liver tumors from a single clinical CT scan. The relevant publication is: T. Popa et al., "Tumor Volume Measurement and Volume Measurement Comparison Plug-ins for VolView Using ITK", SPIE Medical Imaging: Visualization, Image-Guided Procedures, and Display, 2006.
Note: The approach described here can also be used to evaluate Registration, as illustrated in the free form deformation notebook.
Recommended read: A community effort describing limitations of various evaluation metrics,
A. Reinke et al., "Common Limitations of Image Processing Metrics: A Picture Story", available from arxiv (PDF).
End of explanation
def display_with_overlay(
segmentation_number, slice_number, image, segs, window_min, window_max
):
    """Display a CT slice with segmented contours overlaid onto it. The contours are the edges of
    the labeled regions."""
img = image[:, :, slice_number]
msk = segs[segmentation_number][:, :, slice_number]
overlay_img = sitk.LabelMapContourOverlay(
sitk.Cast(msk, sitk.sitkLabelUInt8),
sitk.Cast(
sitk.IntensityWindowing(
img, windowMinimum=window_min, windowMaximum=window_max
),
sitk.sitkUInt8,
),
opacity=1,
contourThickness=[2, 2],
)
# We assume the original slice is isotropic, otherwise the display would be distorted
plt.imshow(sitk.GetArrayViewFromImage(overlay_img))
plt.axis("off")
plt.show()
Explanation: Utility method for display
End of explanation
image = sitk.ReadImage(fdata("liverTumorSegmentations/Patient01Homo.mha"))
segmentation_file_names = [
"liverTumorSegmentations/Patient01Homo_Rad01.mha",
"liverTumorSegmentations/Patient01Homo_Rad02.mha",
"liverTumorSegmentations/Patient01Homo_Rad03.mha",
]
segmentations = [
sitk.ReadImage(fdata(file_name), sitk.sitkUInt8)
for file_name in segmentation_file_names
]
interact(
display_with_overlay,
segmentation_number=(0, len(segmentations) - 1),
slice_number=(0, image.GetSize()[2] - 1),
image=fixed(image),
segs=fixed(segmentations),
window_min=fixed(-1024),
window_max=fixed(976),
);
Explanation: Fetch the data
Retrieve a single CT scan and three manual delineations of a liver tumor. Visual inspection of the data highlights the variability between experts.
End of explanation
# Use majority voting to obtain the reference segmentation. Note that this filter does not resolve ties. In case of
# ties, it will assign max_label_value+1 or a user specified label value (labelForUndecidedPixels) to the result.
# Before using the results of this filter you will have to check whether there were ties and modify the results to
# resolve the ties in a manner that makes sense for your task. The filter implicitly accommodates multiple labels.
labelForUndecidedPixels = 10
reference_segmentation_majority_vote = sitk.LabelVoting(
segmentations, labelForUndecidedPixels
)
manual_plus_majority_vote = list(segmentations)
# Append the reference segmentation to the list of manual segmentations
manual_plus_majority_vote.append(reference_segmentation_majority_vote)
interact(
display_with_overlay,
segmentation_number=(0, len(manual_plus_majority_vote) - 1),
    slice_number=(0, image.GetSize()[2] - 1),
image=fixed(image),
segs=fixed(manual_plus_majority_vote),
window_min=fixed(-1024),
window_max=fixed(976),
);
# Use the STAPLE algorithm to obtain the reference segmentation. This implementation of the original algorithm
# combines a single label from multiple segmentations, the label is user specified. The result of the
# filter is the voxel's probability of belonging to the foreground. We then have to threshold the result to obtain
# a reference binary segmentation.
foregroundValue = 1
threshold = 0.95
reference_segmentation_STAPLE_probabilities = sitk.STAPLE(
segmentations, foregroundValue
)
# We use the overloaded operator to perform thresholding, another option is to use the BinaryThreshold function.
reference_segmentation_STAPLE = reference_segmentation_STAPLE_probabilities > threshold
manual_plus_staple = list(segmentations)
# Append the reference segmentation to the list of manual segmentations
manual_plus_staple.append(reference_segmentation_STAPLE)
interact(
display_with_overlay,
segmentation_number=(0, len(manual_plus_staple) - 1),
    slice_number=(0, image.GetSize()[2] - 1),
image=fixed(image),
segs=fixed(manual_plus_staple),
window_min=fixed(-1024),
window_max=fixed(976),
);
Explanation: Derive a reference
There are a variety of ways to derive a reference segmentation from multiple expert inputs. Several options, there are more, are described in "A comparison of ground truth estimation methods", A. M. Biancardi, A. C. Jirapatnakul, A. P. Reeves.
Two methods that are available in SimpleITK are <b>majority vote</b> and the <b>STAPLE</b> algorithm.
End of explanation
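# Follow-up to the comment above about ties (an addition to the notebook):
# count how many voxels LabelVoting left undecided before using the result.
tie_count = np.sum(
    sitk.GetArrayViewFromImage(reference_segmentation_majority_vote)
    == labelForUndecidedPixels
)
print("Voxels with tied votes: {0}".format(tie_count))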
from enum import Enum
# Use enumerations to represent the various evaluation measures
class OverlapMeasures(Enum):
jaccard, dice, volume_similarity, false_negative, false_positive = range(5)
class SurfaceDistanceMeasures(Enum):
(
hausdorff_distance,
mean_surface_distance,
median_surface_distance,
std_surface_distance,
max_surface_distance,
) = range(5)
# Select which reference we want to use (majority vote or STAPLE)
reference_segmentation = reference_segmentation_STAPLE
# Empty numpy arrays to hold the results
overlap_results = np.zeros(
(len(segmentations), len(OverlapMeasures.__members__.items()))
)
surface_distance_results = np.zeros(
(len(segmentations), len(SurfaceDistanceMeasures.__members__.items()))
)
# Compute the evaluation criteria
# Note that for the overlap measures filter, because we are dealing with a single label we
# use the combined, all labels, evaluation measures without passing a specific label to the methods.
overlap_measures_filter = sitk.LabelOverlapMeasuresImageFilter()
hausdorff_distance_filter = sitk.HausdorffDistanceImageFilter()
reference_surface = sitk.LabelContour(reference_segmentation)
# Use the absolute values of the distance map to compute the surface distances (distance map sign, outside or inside
# relationship, is irrelevant)
reference_distance_map = sitk.Abs(
sitk.SignedMaurerDistanceMap(
reference_surface, squaredDistance=False, useImageSpacing=True
)
)
statistics_image_filter = sitk.StatisticsImageFilter()
# Get the number of pixels in the reference surface by counting all pixels that are 1.
statistics_image_filter.Execute(reference_surface)
num_reference_surface_pixels = int(statistics_image_filter.GetSum())
for i, seg in enumerate(segmentations):
# Overlap measures
overlap_measures_filter.Execute(seg, reference_segmentation)
overlap_results[
i, OverlapMeasures.jaccard.value
] = overlap_measures_filter.GetJaccardCoefficient()
overlap_results[
i, OverlapMeasures.dice.value
] = overlap_measures_filter.GetDiceCoefficient()
overlap_results[
i, OverlapMeasures.volume_similarity.value
] = overlap_measures_filter.GetVolumeSimilarity()
overlap_results[
i, OverlapMeasures.false_negative.value
] = overlap_measures_filter.GetFalseNegativeError()
overlap_results[
i, OverlapMeasures.false_positive.value
] = overlap_measures_filter.GetFalsePositiveError()
# Hausdorff distance
hausdorff_distance_filter.Execute(reference_segmentation, seg)
surface_distance_results[
i, SurfaceDistanceMeasures.hausdorff_distance.value
] = hausdorff_distance_filter.GetHausdorffDistance()
segmented_surface = sitk.LabelContour(seg)
# Symmetric surface distance measures
segmented_distance_map = sitk.Abs(
sitk.SignedMaurerDistanceMap(
segmented_surface, squaredDistance=False, useImageSpacing=True
)
)
# Multiply the binary surface segmentations with the distance maps. The resulting distance
# maps contain non-zero values only on the surface (they can also contain zero on the surface)
seg2ref_distance_map = reference_distance_map * sitk.Cast(
segmented_surface, sitk.sitkFloat32
)
ref2seg_distance_map = segmented_distance_map * sitk.Cast(
reference_surface, sitk.sitkFloat32
)
# Get the number of pixels in the reference surface by counting all pixels that are 1.
statistics_image_filter.Execute(segmented_surface)
num_segmented_surface_pixels = int(statistics_image_filter.GetSum())
# Get all non-zero distances and then add zero distances if required.
seg2ref_distance_map_arr = sitk.GetArrayViewFromImage(seg2ref_distance_map)
seg2ref_distances = list(seg2ref_distance_map_arr[seg2ref_distance_map_arr != 0])
seg2ref_distances = seg2ref_distances + list(
np.zeros(num_segmented_surface_pixels - len(seg2ref_distances))
)
ref2seg_distance_map_arr = sitk.GetArrayViewFromImage(ref2seg_distance_map)
ref2seg_distances = list(ref2seg_distance_map_arr[ref2seg_distance_map_arr != 0])
ref2seg_distances = ref2seg_distances + list(
np.zeros(num_reference_surface_pixels - len(ref2seg_distances))
)
all_surface_distances = seg2ref_distances + ref2seg_distances
# The maximum of the symmetric surface distances is the Hausdorff distance between the surfaces. In
# general, it is not equal to the Hausdorff distance between all voxel/pixel points of the two
# segmentations, though in our case it is. More on this below.
surface_distance_results[
i, SurfaceDistanceMeasures.mean_surface_distance.value
] = np.mean(all_surface_distances)
surface_distance_results[
i, SurfaceDistanceMeasures.median_surface_distance.value
] = np.median(all_surface_distances)
surface_distance_results[
i, SurfaceDistanceMeasures.std_surface_distance.value
] = np.std(all_surface_distances)
surface_distance_results[
i, SurfaceDistanceMeasures.max_surface_distance.value
] = np.max(all_surface_distances)
# Print the matrices
np.set_printoptions(precision=3)
print(overlap_results)
print(surface_distance_results)
Explanation: Evaluate segmentations using the reference
Once we derive a reference from our experts input we can compare segmentation results to it.
Note that in this notebook we compare the expert segmentations to the reference derived from them. This is not relevant for algorithm evaluation, but it can potentially be used to rank your experts.
In this specific implementation we take advantage of the fact that we have a binary segmentation with 1 for foreground and 0 for background.
End of explanation
import pandas as pd
from IPython.display import display, HTML
# Graft our results matrix into pandas data frames
overlap_results_df = pd.DataFrame(
data=overlap_results,
index=list(range(len(segmentations))),
columns=[name for name, _ in OverlapMeasures.__members__.items()],
)
surface_distance_results_df = pd.DataFrame(
data=surface_distance_results,
index=list(range(len(segmentations))),
columns=[name for name, _ in SurfaceDistanceMeasures.__members__.items()],
)
# Display the data as HTML tables and graphs
display(HTML(overlap_results_df.to_html(float_format=lambda x: "%.3f" % x)))
display(HTML(surface_distance_results_df.to_html(float_format=lambda x: "%.3f" % x)))
overlap_results_df.plot(kind="bar").legend(bbox_to_anchor=(1.6, 0.9))
surface_distance_results_df.plot(kind="bar").legend(bbox_to_anchor=(1.6, 0.9))
Explanation: Improved output
If the pandas package is installed in your Python environment then you can easily produce high quality output.
End of explanation
# The formatting of the table using the default settings is less than ideal
print(overlap_results_df.to_latex())
# We can improve on this by specifying the table's column format and the float format
print(
overlap_results_df.to_latex(
column_format="ccccccc", float_format=lambda x: "%.3f" % x
)
)
Explanation: You can also export the data as a table for your LaTeX manuscript using the to_latex function.
<b>Note</b>: You will need to add the \usepackage{booktabs} to your LaTeX document's preamble.
To create the minimal LaTeX document which will allow you to see the difference between the tables below, copy paste:
\documentclass{article}
\usepackage{booktabs}
\begin{document}
paste the tables here
\end{document}
End of explanation
# Create our segmentations and display
image_size = [64, 64]
circle_center = [30, 30]
circle_radius = [20, 20]
# A filled circle with radius R
seg = (
sitk.GaussianSource(sitk.sitkUInt8, image_size, circle_radius, circle_center) > 200
)
# A torus with inner radius r
reference_segmentation1 = seg - (
sitk.GaussianSource(sitk.sitkUInt8, image_size, circle_radius, circle_center) > 240
)
# A torus with inner radius r_2<r
reference_segmentation2 = seg - (
sitk.GaussianSource(sitk.sitkUInt8, image_size, circle_radius, circle_center) > 250
)
gui.multi_image_display2D(
[reference_segmentation1, reference_segmentation2, seg],
["reference 1", "reference 2", "segmentation"],
figure_size=(12, 4),
);
def surface_hausdorff_distance(reference_segmentation, seg):
    """Compute symmetric surface distances and take the maximum."""
reference_surface = sitk.LabelContour(reference_segmentation)
reference_distance_map = sitk.Abs(
sitk.SignedMaurerDistanceMap(
reference_surface, squaredDistance=False, useImageSpacing=True
)
)
statistics_image_filter = sitk.StatisticsImageFilter()
# Get the number of pixels in the reference surface by counting all pixels that are 1.
statistics_image_filter.Execute(reference_surface)
num_reference_surface_pixels = int(statistics_image_filter.GetSum())
segmented_surface = sitk.LabelContour(seg)
segmented_distance_map = sitk.Abs(
sitk.SignedMaurerDistanceMap(
segmented_surface, squaredDistance=False, useImageSpacing=True
)
)
# Multiply the binary surface segmentations with the distance maps. The resulting distance
# maps contain non-zero values only on the surface (they can also contain zero on the surface)
seg2ref_distance_map = reference_distance_map * sitk.Cast(
segmented_surface, sitk.sitkFloat32
)
ref2seg_distance_map = segmented_distance_map * sitk.Cast(
reference_surface, sitk.sitkFloat32
)
# Get the number of pixels in the reference surface by counting all pixels that are 1.
statistics_image_filter.Execute(segmented_surface)
num_segmented_surface_pixels = int(statistics_image_filter.GetSum())
# Get all non-zero distances and then add zero distances if required.
seg2ref_distance_map_arr = sitk.GetArrayViewFromImage(seg2ref_distance_map)
seg2ref_distances = list(seg2ref_distance_map_arr[seg2ref_distance_map_arr != 0])
seg2ref_distances = seg2ref_distances + list(
np.zeros(num_segmented_surface_pixels - len(seg2ref_distances))
)
ref2seg_distance_map_arr = sitk.GetArrayViewFromImage(ref2seg_distance_map)
ref2seg_distances = list(ref2seg_distance_map_arr[ref2seg_distance_map_arr != 0])
ref2seg_distances = ref2seg_distances + list(
np.zeros(num_reference_surface_pixels - len(ref2seg_distances))
)
all_surface_distances = seg2ref_distances + ref2seg_distances
return np.max(all_surface_distances)
hausdorff_distance_filter = sitk.HausdorffDistanceImageFilter()
# Use reference1, larger inner annulus radius, the surface based computation
# has a smaller difference.
hausdorff_distance_filter.Execute(reference_segmentation1, seg)
print(
"HausdorffDistanceImageFilter result (reference1-segmentation): "
+ str(hausdorff_distance_filter.GetHausdorffDistance())
)
print(
"Surface Hausdorff result (reference1-segmentation): "
+ str(surface_hausdorff_distance(reference_segmentation1, seg))
)
# Use reference2, smaller inner annulus radius, the surface based computation
# has a larger difference.
hausdorff_distance_filter.Execute(reference_segmentation2, seg)
print(
"HausdorffDistanceImageFilter result (reference2-segmentation): "
+ str(hausdorff_distance_filter.GetHausdorffDistance())
)
print(
"Surface Hausdorff result (reference2-segmentation): "
+ str(surface_hausdorff_distance(reference_segmentation2, seg))
)
Explanation: Segmentation Representation and the Hausdorff Distance
The results of segmentation can be represented as a set of closed contours/surfaces or as the discrete set of points (pixels/voxels) belonging to the segmented objects. Ideally using either representation would yield the same values for the segmentation evaluation metrics. Unfortunately, the Hausdorff distance computed directly from each of these representations will generally not yield the same results. In some cases, such as the one above, the two values do match (table entries hausdorff_distance and max_surface_distance).
The following example illustrates that the Hausdorff distance for the contour/surface representation and the discrete point set representing the segmented object differ, and that there is no correlation between the two.
Our object of interest is annulus shaped (e.g. myocardium in short axis MRI). It has an internal radius, $r$, and an external radius $R>r$. We over-segmented the object and obtained a filled circle of radius $R$.
The contour/surface based Hausdorff distance is $R-r$, the distance between external contours is zero and between internal and external contours is $R-r$. The pixel/voxel object based Hausdorff distance is $r$, corresponding to the distance between the center point in the over-segmented result to the inner circle contour. For different values of $r$ we can either have $R-r \geq r$ or $R-r \leq r$.
Note: Both computations of Hausdorff distance are valid, though the common approach is to use the pixel/voxel based representation for computing the Hausdorff distance.
The following cells show these differences in detail.
End of explanation |
11,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Objective
In this notebook I am testing a reduced workflow that will
Step1: Import test image. The colormap is Matlab's Jet
Step2: Reduce number of colours
I use here Scikit-learn segmentation using k-means clustering in Color-(x,y,z) space
Step3: Convert from RGB to HSL, get unique values of H, S, and L then sort both lightness L and hue H, by increasing values of H
I tried two methods that return the same result
Step4: Import a function to plot colored lines in the final plot using the colormap created above
From David Sanders
Step5: Make final plot of the sorted hue, H versus lightness, L, colored by L
Step6: Run perceptual test checks for monotonicity
Step7: Now we try it on an abstract rainbow image
Step8: Try it on mycarta perceptual rainbow
Step9: The test should have worked but it did not.
We need to include some smoothing, or despiking to deal with small non monotonic value pairs. See below tests
Step10: From Matt Hall
https | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from skimage import data, io, segmentation, color
from skimage.future import graph
%matplotlib inline
import requests
from PIL import Image
from io import BytesIO
Explanation: Objective
In this notebook I am testing a reduced workflow that will:
1) input a geophysical map as an image
2) reduce the number of colours
3) convert the image with reduced colours from RGB to HSL
4) extract unique values of H, then sort those and also the L values in the corresponding pixels, by the same index
5) visualize the result as an H vs. L curve to QC the method
6) run the perceptual test by checking for monotonicity
Preliminaries - import main libraries
End of explanation
url = 'https://mycarta.files.wordpress.com/2015/04/jet_tight.png'
r = requests.get(url)
img = np.asarray(Image.open(BytesIO(r.content)).convert('RGB'))
img = np.asarray(Image.open('data/cbar/test.png'))[...,:3]
# plot image
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(111)
plt.imshow(img)
ax1.xaxis.set_ticks([])
ax1.yaxis.set_ticks([])
plt.show()
Explanation: Import test image. The colormap is Matlab's Jet
End of explanation
# parameters chosen by trial and error. Will have to find a way to automatically optimize them
labels1 = segmentation.slic(img, compactness=30, n_segments=32)
out1 = color.label2rgb(labels1, img, kind='avg')
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(111)
plt.imshow(out1)
ax1.xaxis.set_ticks([])
ax1.yaxis.set_ticks([])
plt.show()
Explanation: Reduce number of colours
Here I use scikit-image's SLIC segmentation, which performs k-means clustering in Color-(x,y,z) space
http://scikit-image.org/docs/dev/api/skimage.segmentation.html#skimage.segmentation.slic
End of explanation
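# Quick check (an addition): how many distinct colours remain after the
# SLIC-based reduction (np.unique with axis=0 needs numpy >= 1.13).
n_colours = len(np.unique(out1.reshape(-1, 3), axis=0))
print('distinct colours after reduction: ' + str(n_colours))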
width, height, dump = np.shape(out1)
print(width, height, dump)
# method 1
# extract lightness and hue, combine them into a 2D array
# extract
from skimage.color import rgb2lab, lab2lch, lch2lab, lab2rgb
lab = rgb2lab(out1)
lch = lab2lch(lab)
lab = np.asarray(lab)
lch = np.asarray(lch)
# reshape
pixels_lab = np.reshape(lab, (width*height, -1))
l1, a, b = np.split(pixels_lab, 3, axis=-1)
pixels_lch = np.reshape(lch, (width*height, -1))
l2, c, h = np.split(pixels_lch, 3, axis=-1)
# flatten
import itertools
lM = list(itertools.chain.from_iterable(l2))
hM = list(itertools.chain.from_iterable(h))
# zip together to make 2D numpy array
lhM = np.asarray(list(zip(hM,lM)))
lhM
# Sorting unique rows
# Joe Kington's answer on Stackoverflow: http://stackoverflow.com/a/16971224
def unique_rows(data):
print(data.shape)
uniq = np.unique(data.view(data.dtype.descr * data.shape[1]))
return uniq.view(data.dtype).reshape(-1, data.shape[1])
uniqLM = unique_rows(lhM)
uniqLM.shape
# method 2
# sorting both lightness and hue by hue separately
from skimage.color import rgb2lab, lab2lch, lch2lab, lab2rgb
lab = rgb2lab(out1)
lch = lab2lch(lab)
lab = np.asarray(lab)
lch = np.asarray(lch)
pixels_lab = np.reshape(lab, (width*height, -1))
l1, a, b = np.split(pixels_lab, 3, axis=-1)
pixels_lch = np.reshape(lch, (width*height, -1))
l2, c, h = np.split(pixels_lch, 3, axis=-1)
huniq, unidx = np.unique(h, return_index=True)
luniq = l2[unidx]
cuniq = c[unidx]
# flatten luniq, cuniq
import itertools
luniqM = list(itertools.chain.from_iterable(luniq))
cuniqM = list(itertools.chain.from_iterable(cuniq))
# compare output of two methods
lhM2 = np.asarray(list(zip(huniq,luniqM)))
print('method 2')
print(' H L')
print(lhM2[:4])
print(lhM2[-4:])
print('method 1')
print(' H L')
print(uniqLM[:4])
print(uniqLM[-4:])
Explanation: Convert from RGB to HSL, get unique values of H, S, and L then sort both lightness L and hue H, by increasing values of H
I tried two methods that return the same result:
1) get H, L, combine into 2D array, sort unique rows
2) sort H, with index returned then sort L with index
End of explanation
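# Explicit check (an addition) that the two sorting approaches above agree,
# instead of only eyeballing the printed heads and tails.
print(lhM2.shape == uniqLM.shape and np.allclose(lhM2, uniqLM))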
from matplotlib.collections import LineCollection
from matplotlib.colors import ListedColormap, BoundaryNorm
# Data manipulation:
def make_segments(x, y):
'''
Create list of line segments from x and y coordinates, in the correct format for LineCollection:
an array of the form numlines x (points per line) x 2 (x and y) array
'''
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
return segments
# Interface to LineCollection:
def colorline(x, y, z=None, cmap=plt.get_cmap('copper'), norm=plt.Normalize(0.0, 1.0), linewidth=3, alpha=1.0):
'''
Plot a colored line with coordinates x and y
Optionally specify colors in the array z
Optionally specify a colormap, a norm function and a line width
'''
# Default colors equally spaced on [0,1]:
if z is None:
z = np.linspace(0.0, 1.0, len(x))
# Special case if a single number:
if not hasattr(z, "__iter__"): # to check for numerical input -- this is a hack
z = np.array([z])
z = np.asarray(z)
segments = make_segments(x, y)
lc = LineCollection(segments, array=z, cmap=cmap, norm=norm, linewidth=linewidth, alpha=alpha)
ax = plt.gca()
ax.add_collection(lc)
return lc
def clear_frame(ax=None):
# Taken from a post by Tony S Yu
if ax is None:
ax = plt.gca()
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
    for spine in ax.spines.values():
spine.set_visible(False)
Explanation: Import a function to plot colored lines in the final plot using the colormap created above
From David Sanders:
https://github.com/dpsanders/matplotlib-examples/blob/master/colorline.ipynb
End of explanation
# To color by L, it has to be normalized to [0 1]
luniqM_n=(luniqM-min(luniqM))/(max(luniqM)-min(luniqM))
fig = plt.figure(figsize=(16,4))
plt.xticks(np.arange(0, 2.25*np.pi,0.25*np.pi),[0., 45., 90., 135., 180., 225., 270., 315., 360.])
# Hue as 0-360 angle
ax1 = fig.add_subplot(111)
# ax1.scatter(huniq, luniq)
colorline(huniq,luniqM, luniqM_n, linewidth=4,cmap='gray')
ax1.set_xlim(0, 2.25*np.pi)
ax1.set_ylim(-5, 105)
ax1.text(5, 95, 'H vs. L - colored by L', va='top')
plt.show()
# as of Nov 05, 2015 after reinstalling Anaconda, using Colorline gives a future warning:
# https://github.com/dpsanders/matplotlib-examples/issues/1
# rerun above to suppress warning
Explanation: Make final plot of the sorted hue, H versus lightness, L, colored by L
End of explanation
# Stackoverflow answer http://stackoverflow.com/a/4985520
def pairwise(seq):
items = iter(seq)
last = next(items)
for item in items:
yield last, item
last = item
def strictly_increasing(L):
return all(x<y for x, y in pairwise(L))
def strictly_decreasing(L):
return all(x>y for x, y in pairwise(L))
def non_increasing(L):
return all(x>=y for x, y in pairwise(L))
def non_decreasing(L):
return all(x<=y for x, y in pairwise(L))
print(strictly_increasing(luniq))
print(non_decreasing(luniq))
print(strictly_decreasing(luniq))
print(non_increasing(luniq))
Explanation: Run perceptual test checks for monotonicity
End of explanation
# Originally from: http://bgfons.com/upload/rainbow_texture1761.jpg. Resized and saved as png
url = 'https://mycarta.files.wordpress.com/2015/11/rainbow_texture17611.png'
r = requests.get(url)
img = np.asarray(Image.open(BytesIO(r.content)).convert('RGB'))
# plot image
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(111)
plt.imshow(img)
ax1.xaxis.set_ticks([])
ax1.yaxis.set_ticks([])
plt.show()
labels1 = segmentation.slic(img, compactness=30, n_segments=32)
out1 = color.label2rgb(labels1, img, kind='avg')
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(111)
plt.imshow(out1)
ax1.xaxis.set_ticks([])
ax1.yaxis.set_ticks([])
plt.show()
width, height, dump = np.shape(out1)
# method 2
# sorting both lightness and hue by hue separately
from skimage.color import rgb2lab, lab2lch, lch2lab, lab2rgb
lab = rgb2lab(out1)
lch = lab2lch(lab)
lab = np.asarray(lab)
lch = np.asarray(lch)
pixels_lab = np.reshape(lab, (width*height, -1))
l1, a, b = np.split(pixels_lab, 3, axis=-1)
pixels_lch = np.reshape(lch, (width*height, -1))
l2, c, h = np.split(pixels_lch, 3, axis=-1)
huniq, unidx = np.unique(h, return_index=True)
luniq = l2[unidx]
cuniq = c[unidx]
# flatten luniq, cuniq
import itertools
luniqM = list(itertools.chain.from_iterable(luniq))
cuniqM = list(itertools.chain.from_iterable(cuniq))
# To color by L, it has to be normalized to [0 1]
luniqM_n=(luniqM-min(luniqM))/(max(luniqM)-min(luniqM))
fig = plt.figure(figsize=(8,4))
plt.xticks(np.arange(0, 2.25*np.pi,0.25*np.pi),[0., 45., 90., 135., 180., 225., 270., 315., 360.])
# Hue as 0-360 angle
ax1 = fig.add_subplot(111)
# ax1.scatter(huniq, luniq)
colorline(huniq,luniqM, luniqM_n, linewidth=4,cmap='gray')
ax1.set_xlim(0, 2.25*np.pi)
ax1.set_ylim(-5, 105)
ax1.text(5, 95, 'H vs. L - colored by L', va='top')
plt.show()
print(strictly_increasing(luniq))
print(non_decreasing(luniq))
print(strictly_decreasing(luniq))
print(non_increasing(luniq))
Explanation: Now we try it on an abstract rainbow image
End of explanation
url = 'https://mycarta.files.wordpress.com/2015/04/cubic_no_red_tight.png'
r = requests.get(url)
img = np.asarray(Image.open(BytesIO(r.content)).convert('RGB'))
# plot image
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(111)
plt.imshow(img)
ax1.xaxis.set_ticks([])
ax1.yaxis.set_ticks([])
plt.show()
labels1 = segmentation.slic(img, compactness=30, n_segments=32)
out1 = color.label2rgb(labels1, img, kind='avg')
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(111)
plt.imshow(out1)
ax1.xaxis.set_ticks([])
ax1.yaxis.set_ticks([])
plt.show()
width, height, dump = np.shape(out1)
# method 2
# sorting both lightness and hue by hue separately
from skimage.color import rgb2lab, lab2lch, lch2lab, lab2rgb
lab = rgb2lab(out1)
lch = lab2lch(lab)
lab = np.asarray(lab)
lch = np.asarray(lch)
pixels_lab = np.reshape(lab, (width*height, -1))
l1, a, b = np.split(pixels_lab, 3, axis=-1)
pixels_lch = np.reshape(lch, (width*height, -1))
l2, c, h = np.split(pixels_lch, 3, axis=-1)
huniq, unidx = np.unique(h, return_index=True)
luniq = l2[unidx]
cuniq = c[unidx]
# flatten luniq, cuniq
import itertools
luniqM = list(itertools.chain.from_iterable(luniq))
cuniqM = list(itertools.chain.from_iterable(cuniq))
# To color by L, it has to be normalized to [0 1]
luniqM_n=(luniqM-min(luniqM))/(max(luniqM)-min(luniqM))
fig = plt.figure(figsize=(8,4))
plt.xticks(np.arange(0, 2.25*np.pi,0.25*np.pi),[0., 45., 90., 135., 180., 225., 270., 315., 360.])
# Hue as 0-360 angle
ax1 = fig.add_subplot(111)
# ax1.scatter(huniq, luniq)
colorline(huniq,luniqM, luniqM_n, linewidth=4,cmap='gray')
ax1.set_xlim(0, 2.25*np.pi)
ax1.set_ylim(-5, 105)
ax1.text(5, 95, 'H vs. L - colored by L', va='top')
plt.show()
print(strictly_increasing(luniq))
print(strictly_decreasing(luniq))
print(non_increasing(luniq))
print(non_decreasing(luniq))
Explanation: Try it on mycarta perceptual rainbow
End of explanation
print(luniqM[:15])
plt.plot(luniqM[:15])
Explanation: The test should have worked but it did not.
We need to include some smoothing, or despiking to deal with small non monotonic value pairs. See below tests
End of explanation
def moving_average(a, length, mode='valid'):
#pad = np.floor(length/2)
pad = int(np.floor(length/2)) # replace to prevent a deprecation warning
# due to passing a float as an index
if mode == 'full':
pad *= 2
# Make a padded version, paddding with first and last values
r = np.empty(a.shape[0] + 2*pad)
r[:pad] = a[0]
r[pad:-pad] = a
r[-pad:] = a[-1]
# Cumsum with shifting trick
s = np.cumsum(r, dtype=float)
s[length:] = s[length:] - s[:-length]
out = s[length-1:]/length
# Decide what to return
if mode == 'same':
if out.shape[0] != a.shape[0]:
# If size doesn't match, then interpolate.
out = (out[:-1,...] + out[1:,...]) / 2
return out
elif mode == 'valid':
return out[pad:-pad]
else: # mode=='full' and we used a double pad
return out
avg = moving_average(np.asarray(luniqM), 7, mode='same')
plt.plot(avg[:15])
print(strictly_increasing(avg))
print(non_decreasing(avg))
print(strictly_decreasing(avg))
print(non_increasing(avg))
Explanation: From Matt Hall
https://github.com/kwinkunks/notebooks/blob/master/Backus.ipynb
End of explanation |
11,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
W3C prov based provenance storage in Neo4j
This notebook tries to provides a nearly complete mapping between a W3C prov standard based provenance descriptions and a Neo4j graph representation.
The approach taken is as follows
Step1: Example Prov-JSON export and import
Step2: Example Transformation to Neo4j graph
The transformation code is based on the prov_to_dot() function in the dot.py package of the prov python package mentioned above ( https
Step3: generate neo4j graph based on generated neo4j Nodes map and Relationship list
Step4: "remember" cells | Python Code:
from IPython.display import display, Image
Image(filename='key-concepts.png')
Explanation: W3C prov based provenance storage in Neo4j
This notebook tries to provides a nearly complete mapping between a W3C prov standard based provenance descriptions and a Neo4j graph representation.
The approach taken is as follows:
- take W3C prov json formated document as input
- import using python-prov tool (result internal w3c prov representation)
- generate neo4j nodes and relationships and generate graph
Next steps are:
- example queries
- refinement and discussion of neo-prov-utils based on this notebook
- development of a neo-prov-utils python package to be included in provenance capture software
- example usage of package in ENES community provenance capture activities
W3C prov standard and used w3c prov tool
W3C prov documents overview: https://www.w3.org/TR/prov-overview/
The python prov software library is used ( https://github.com/trungdong/prov ) supporting the W3C provenance data model and providing PROV-JSON and PROV-XML implementations.
The PROV-JSON representation proposal is described in https://www.w3.org/Submission/2013/SUBM-prov-json-20130424/
In the following PROV-JSON documents are used as a compact way to
- specify provenance records
- exchange provenance info between client and server components
The generic W3C prov graph model
The generic prov graph model is based on:
- Nodes (Agent, Entity, Activity) connected by
- Edges (wasAttributedTo, wasDerivedFrom, wasGeneratedBy, used, wasAssociatedWith)
see the following grahical representatin (taken from https://www.w3.org/TR/prov-overview/)
End of explanation
from prov.model import ProvDocument
d1 = ProvDocument()
%%writefile wps-prov.json
{
"prefix": {
"enes": "http://www.enes.org/enes_entitiy/",
"workflow": "http://www.enes.org/enes/workflow/#",
"dc": "http://dublin-core.org/",
"user": "http://www.enes.org/enes_entity/user/",
"file": "http://www.enes.org/enes_entity/file/",
"esgf": "http://carbon.dkrz.de/file/cmip5/",
"enes_data": "http://enes_org/enes_data#"
},
"entity": {
"enes:input-data-set.nc": {
"dc:title": "eval_series_1",
"prov:type": "Dataset",
"prov:label": "experiment-mpi-m"
},
"enes:temporal-mean-result1-v1.nc": {
"dc:title": "eval_series_1_1"
}
},
"wasDerivedFrom": {
"enes:process-step1": {
"prov:usedEntity": "enes:input-data-set.nc",
"prov:generatedEntity": "enes:temporal-mean-result1-v1.nc"
}
},
"activity": {
"workflow:temporal-mean-cdo": {
}
},
"used": {
"enes:used-rel1": {
"prov:entity": "enes:input-data-set.nc",
"prov:activity": "workflow:temporal-mean-cdo"
}
},
"wasGeneratedBy": {
"enes:gen-rel1": {
"prov:entity": "enes:temporal-mean-result1-v1.nc",
"prov:activity": "workflow:temporal-mean-cdo"
}
},
"agent": {
"enes:Stephan Kindermann": {}
},
"wasAttributedTo": {
"enes:data-generator-rel1": {
"prov:entity": "enes:temporal-mean-result1-v1.nc",
"prov:agent": "enes:Stephan Kindermann"
}
}
}
d2 = ProvDocument.deserialize('wps-prov.json')
xml_result = d2.serialize(format='xml')
%%writefile wps-prov2.xml
<?xml version=1.0 encoding=ASCII?>\n<prov:document xmlns:dc="http://dublin-core.org/" xmlns:enes="http://www.enes.org/enes_entitiy/" xmlns:enes_data="http://enes_org/enes_data#" xmlns:esgf="http://carbon.dkrz.de/file/cmip5/" xmlns:file="http://www.enes.org/enes_entity/file/" xmlns:prov="http://www.w3.org/ns/prov#" xmlns:user="http://www.enes.org/enes_entity/user/" xmlns:workflow="http://www.enes.org/enes/workflow/#" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">\n <prov:wasDerivedFrom prov:id="enes:process-step1">\n <prov:generatedEntity prov:ref="enes:temporal-mean-result1-v1.nc"/>\n <prov:usedEntity prov:ref="enes:input-data-set.nc"/>\n </prov:wasDerivedFrom>\n <prov:used prov:id="enes:used-rel1">\n <prov:activity prov:ref="workflow:temporal-mean-cdo"/>\n <prov:entity prov:ref="enes:input-data-set.nc"/>\n </prov:used>\n <prov:wasAttributedTo prov:id="enes:data-generator-rel1">\n <prov:entity prov:ref="enes:temporal-mean-result1-v1.nc"/>\n <prov:agent prov:ref="enes:Stephan Kindermann"/>\n </prov:wasAttributedTo>\n <prov:agent prov:id="enes:Stephan Kindermann"/>\n <prov:entity prov:id="enes:temporal-mean-result1-v1.nc">\n <dc:title>eval_series_1_1</dc:title>\n </prov:entity>\n <prov:entity prov:id="enes:input-data-set.nc">\n <prov:label>experiment-mpi-m</prov:label>\n <prov:type xsi:type="xsd:string">Dataset</prov:type>\n <dc:title>eval_series_1</dc:title>\n </prov:entity>\n <prov:activity prov:id="workflow:temporal-mean-cdo"/>\n <prov:wasGeneratedBy prov:id="enes:gen-rel1">\n <prov:entity prov:ref="enes:temporal-mean-result1-v1.nc"/>\n <prov:activity prov:ref="workflow:temporal-mean-cdo"/>\n </prov:wasGeneratedBy>\n</prov:document>
d_xml_test = ProvDocument.deserialize('wps-prov2.xml',format='xml')
print(d2.serialize(indent=2))
def visualize_prov(prov_doc):
from prov.dot import prov_to_dot
from IPython.display import Image
dot = prov_to_dot(prov_doc)
dot.write_png('tmp1.png')
dot.write_pdf('tmp1.pdf')
return Image('tmp1.png')
visualize_prov(d2)
Explanation: Example Prov-JSON export and import
End of explanation
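The same kind of document can also be built programmatically instead of via PROV-JSON; a minimal sketch using the prov API (the 'ex' namespace and the identifiers below are made up for illustration only):
# Hypothetical programmatic construction of a small PROV document with the prov package.
from prov.model import ProvDocument
doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/')
e_in = doc.entity('ex:input.nc')
e_out = doc.entity('ex:result.nc')
act = doc.activity('ex:temporal-mean')
ag = doc.agent('ex:alice')
doc.used(act, e_in)
doc.wasGeneratedBy(e_out, act)
doc.wasDerivedFrom(e_out, e_in)
doc.wasAttributedTo(e_out, ag)
print(doc.serialize(indent=2))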
## d2 graph is input parameter for this cell ..
import six
from py2neo import Graph, Node, Relationship, authenticate
node_map = {}
count = [0, 0, 0, 0] # counters for node ids
records = d2.get_records()
relations = []
use_labels = True
show_relation_attributes = True
other_attributes = True
show_nary = True
def _add_node(record):
count[0] += 1
node_id = 'n%d' % count[0]
if use_labels:
if record.label == record.identifier:
node_label = '"%s"' % six.text_type(record.label)
else:
# Fancier label if both are different. The label will be
# the main node text, whereas the identifier will be a
# kind of suptitle.
node_label = six.text_type(record.label)+','+six.text_type(record.identifier)
else:
node_label = six.text_type(record.identifier)
uri = record.identifier.uri
node = Node(node_id, label=node_label, URL=uri)
node_map[uri] = node
## create Node ... ##dot.add_node(node)
return node
def _get_node(qname):
if qname is None:
print "ERROR: _get_node called for empty node"
#return _get_bnode()
uri = qname.uri
if uri not in node_map:
_add_generic_node(qname)
return node_map[uri]
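# NOTE: _add_generic_node is called above but not defined in this excerpt (it exists in the
# prov dot.py code this cell was adapted from); a minimal stand-in so the lookup cannot fail:
def _add_generic_node(qname):
    count[1] += 1
    node = Node('b%d' % count[1], label=six.text_type(qname), URL=qname.uri)
    node_map[qname.uri] = node
    return node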
from prov.model import (
PROV_ACTIVITY, PROV_AGENT, PROV_ALTERNATE, PROV_ASSOCIATION,
PROV_ATTRIBUTION, PROV_BUNDLE, PROV_COMMUNICATION, PROV_DERIVATION,
PROV_DELEGATION, PROV_ENTITY, PROV_GENERATION, PROV_INFLUENCE,
PROV_INVALIDATION, PROV_END, PROV_MEMBERSHIP, PROV_MENTION,
PROV_SPECIALIZATION, PROV_START, PROV_USAGE, Identifier,
PROV_ATTRIBUTE_QNAMES, sorted_attributes, ProvException
)
for rec in records:
if rec.is_element():
_add_node(rec)
else:
# Saving the relations for later processing
relations.append(rec)
neo_rels = []
for rec in relations:
args = rec.args
# skipping empty records
if not args:
continue
# picking element nodes
nodes = [
value for attr_name, value in rec.formal_attributes
if attr_name in PROV_ATTRIBUTE_QNAMES
]
other_attributes = [
(attr_name, value) for attr_name, value in rec.attributes
if attr_name not in PROV_ATTRIBUTE_QNAMES
]
add_attribute_annotation = (
show_relation_attributes and other_attributes
)
add_nary_elements = len(nodes) > 2 and show_nary
if len(nodes) < 2: # too few elements for a relation?
continue # cannot draw this
if add_nary_elements or add_attribute_annotation:
# a blank node for n-ary relations or the attribute annotation
# the first segment
rel = Relationship(_get_node(nodes[0]), rec.get_type()._str,_get_node(nodes[1]))
print "relationship: ",rel
neo_rels.append(rel)
if add_nary_elements:
for node in nodes[2:]:
if node is not None:
relx = Relationship(_get_node(nodes[0]), rec.get_type()._str, _get_node(node))
neo_rels.append(relx)
else:
# show a simple binary relations with no annotation
rel = Relationship(_get_node(nodes[0]), rec.get_type()._str,_get_node(nodes[1]))
neo_rels.append(rel)
print(node_map)
print(neo_rels)
Explanation: Example Transformation to Neo4j graph
The transformation code is based on the prov_to_dot() function in the dot.py package of the prov python package mentioned above ( https://github.com/trungdong/prov ). The code was simplified and modified to generate Neo4j nodes and relationships instead of dot nodes and relations.
End of explanation
authenticate("localhost:7474", "neo4j", "prolog16")
# connect to authenticated graph database
graph = Graph("http://localhost:7474/db/data/")
graph.delete_all()
for rel in neo_rels:
graph.create(rel)
%load_ext cypher
results = %cypher http://neo4j:prolog16@localhost:7474/db/data MATCH (a)-[r]-(b) RETURN a,r, b
%matplotlib inline
results.get_graph()
results.draw()
Explanation: generate neo4j graph based on generated neo4j Nodes map and Relationship list
End of explanation
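As a first example query (one of the "next steps" listed above), the stored relationships can be summarized by type using the same %cypher magic as before; a small sketch:
# Hypothetical follow-up query: count how many relationships of each PROV type were stored.
rel_counts = %cypher http://neo4j:prolog16@localhost:7474/db/data MATCH (a)-[r]->(b) RETURN type(r) AS prov_relation, count(*) AS n
rel_counts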
# example info calls on nodes and relations ..
der = d2.get_records()[0]
print(der.get_type()._str + "tst")
print(der.attributes)
print(der.is_relation())
print(der.label)
print(der.value)
print(der.args)
print(der.is_element())
print(der.formal_attributes)
print(der.get_asserted_types())
print(der.get_provn())
Explanation: "remember" cells
End of explanation |
11,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Object Detection with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import the required packages.
Step3: Prepare the dataset
Here you'll use the same dataset as the AutoML quickstart.
The Salads dataset is available at
Step4: Step 2. Load the dataset.
Model Maker will take input data in the CSV format. Use the object_detector.DataLoader.from_csv method to load the dataset and split them into the training, validation and test images.
Training images
Step5: Step 3. Train the TensorFlow model with the training data.
The EfficientDet-Lite0 model uses epochs = 50 by default, which means it will go through the training dataset 50 times. You can look at the validation accuracy during training and stop early to avoid overfitting.
Set batch_size = 8 here so you will see that it takes 21 steps to go through the 175 images in the training dataset.
Set train_whole_model=True to fine-tune the whole model instead of just training the head layer to improve accuracy. The trade-off is that it may take longer to train the model.
Step6: Step 4. Evaluate the model with the test data.
After training the object detection model using the images in the training dataset, use the remaining 25 images in the test dataset to evaluate how the model performs against new data it has never seen before.
As the default batch size is 64, it will take 1 step to go through the 25 images in the test dataset.
The evaluation metrics are same as COCO.
Step7: Step 5. Export as a TensorFlow Lite model.
Export the trained object detection model to the TensorFlow Lite format by specifying which folder you want to export the quantized model to. The default post-training quantization technique is full integer quantization.
Step8: Step 6. Evaluate the TensorFlow Lite model.
Several factors can affect the model accuracy when exporting to TFLite
Step12: You can download the TensorFlow Lite model file using the left sidebar of Colab. Right-click on the model.tflite file and choose Download to download it to your local computer.
This model can be integrated into an Android or an iOS app using the ObjectDetector API of the TensorFlow Lite Task Library.
See the TFLite Object Detection sample app for more details on how the model is used in an working app.
Note
Step13: (Optional) Compile For the Edge TPU
Now that you have a quantized EfficientDet Lite model, it is possible to compile and deploy to a Coral EdgeTPU.
Step 1. Install the EdgeTPU Compiler
Step14: Step 2. Select number of Edge TPUs, Compile
The EdgeTPU has 8MB of SRAM for caching model paramaters (more info). This means that for models that are larger than 8MB, inference time will be increased in order to transfer over model paramaters. One way to avoid this is Model Pipelining - splitting the model into segments that can have a dedicated EdgeTPU. This can significantly improve latency.
The below table can be used as a reference for the number of Edge TPUs to use - the larger models will not compile for a single TPU as the intermediate tensors can't fit in on-chip memory.
| Model architecture | Minimum TPUs | Recommended TPUs
|--------------------|-------|-------|
| EfficientDet-Lite0 | 1 | 1 |
| EfficientDet-Lite1 | 1 | 1 |
| EfficientDet-Lite2 | 1 | 2 |
| EfficientDet-Lite3 | 2 | 2 |
| EfficientDet-Lite4 | 2 | 3 | | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
!pip install -q --use-deprecated=legacy-resolver tflite-model-maker
!pip install -q pycocotools
Explanation: Object Detection with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_object_detection"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_object_detection.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_object_detection.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_object_detection.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this colab notebook, you'll learn how to use the TensorFlow Lite Model Maker library to train a custom object detection model capable of detecting salads within images on a mobile device.
The Model Maker library uses transfer learning to simplify the process of training a TensorFlow Lite model using a custom dataset. Retraining a TensorFlow Lite model with your own custom dataset reduces the amount of training data required and will shorten the training time.
You'll use the publicly available Salads dataset, which was created from the Open Images Dataset V4.
Each image in the dataset contains objects labeled as one of the following classes:
* Baked Good
* Cheese
* Salad
* Seafood
* Tomato
The dataset contains the bounding-boxes specifying where each object locates, together with the object's label.
Here is an example image from the dataset:
<br/>
<img src="https://cloud.google.com/vision/automl/object-detection/docs/images/quickstart-preparing_a_dataset.png" width="400" hspace="0">
Prerequisites
Install the required packages
Start by installing the required packages, including the Model Maker package from the GitHub repo and the pycocotools library you'll use for evaluation.
End of explanation
import numpy as np
import os
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.config import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
from absl import logging
logging.set_verbosity(logging.ERROR)
Explanation: Import the required packages.
End of explanation
spec = model_spec.get('efficientdet_lite0')
Explanation: Prepare the dataset
Here you'll use the same dataset as the AutoML quickstart.
The Salads dataset is available at:
gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv.
It contains 175 images for training, 25 images for validation, and 25 images for testing. The dataset has five classes: Salad, Seafood, Tomato, Baked goods, Cheese.
<br/>
The dataset is provided in CSV format:
TRAINING,gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg,Salad,0.0,0.0954,,,0.977,0.957,,
VALIDATION,gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg,Seafood,0.0154,0.1538,,,1.0,0.802,,
TEST,gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg,Tomato,0.0,0.655,,,0.231,0.839,,
Each row corresponds to an object localized inside a larger image, with each object specifically designated as test, train, or validation data. You'll learn more about what that means in a later stage in this notebook.
The three lines included here indicate three distinct objects located inside the same image available at gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg.
Each row has a different label: Salad, Seafood, Tomato, etc.
Bounding boxes are specified for each image using the top left and bottom right vertices.
Here is a visualzation of these three lines:
<br>
<img src="https://cloud.google.com/vision/automl/object-detection/docs/images/quickstart-preparing_a_dataset.png" width="400" hspace="100">
If you want to know more about how to prepare your own CSV file and the minimum requirements for creating a valid dataset, see the Preparing your training data guide for more details.
If you are new to Google Cloud, you may wonder what the gs:// URL means. They are URLs of files stored on Google Cloud Storage (GCS). If you make your files on GCS public or authenticate your client, Model Maker can read those files similarly to your local files.
However, you don't need to keep your images on Google Cloud to use Model Maker. You can use a local path in your CSV file and Model Maker will just work.
Quickstart
There are six steps to training an object detection model:
Step 1. Choose an object detection model architecture.
This tutorial uses the EfficientDet-Lite0 model. EfficientDet-Lite[0-4] are a family of mobile/IoT-friendly object detection models derived from the EfficientDet architecture.
Here is the performance of each EfficientDet-Lite models compared to each others.
| Model architecture | Size(MB) | Latency(ms) | Average Precision** |
|--------------------|-----------|---------------|----------------------|
| EfficientDet-Lite0 | 4.4 | 37 | 25.69% |
| EfficientDet-Lite1 | 5.8 | 49 | 30.55% |
| EfficientDet-Lite2 | 7.2 | 69 | 33.97% |
| EfficientDet-Lite3 | 11.4 | 116 | 37.70% |
| EfficientDet-Lite4 | 19.9 | 260 | 41.96% |
<i> * Size of the integer quantized models. <br/>
Latency measured on Pixel 4 using 4 threads on CPU. <br/>
* Average Precision is the mAP (mean Average Precision) on the COCO 2017 validation dataset.
</i>
End of explanation
train_data, validation_data, test_data = object_detector.DataLoader.from_csv('gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv')
Explanation: Step 2. Load the dataset.
Model Maker will take input data in the CSV format. Use the object_detector.DataLoader.from_csv method to load the dataset and split them into the training, validation and test images.
Training images: These images are used to train the object detection model to recognize salad ingredients.
Validation images: These are images that the model didn't see during the training process. You'll use them to decide when you should stop the training, to avoid overfitting.
Test images: These images are used to evaluate the final model performance.
You can load the CSV file directly from Google Cloud Storage, but you don't need to keep your images on Google Cloud to use Model Maker. You can specify a local CSV file on your computer, and Model Maker will work just fine.
End of explanation
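If your own annotations are in PASCAL VOC format rather than CSV, Model Maker can load those too; a sketch (the folder paths and label list below are placeholders, not part of this tutorial's data):
# Hypothetical local-data alternative to the CSV loader above.
train_data_local = object_detector.DataLoader.from_pascal_voc(
    images_dir='images/train',
    annotations_dir='annotations/train',
    label_map=['Baked Goods', 'Cheese', 'Salad', 'Seafood', 'Tomato'])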
model = object_detector.create(train_data, model_spec=spec, batch_size=8, train_whole_model=True, validation_data=validation_data)
Explanation: Step 3. Train the TensorFlow model with the training data.
The EfficientDet-Lite0 model uses epochs = 50 by default, which means it will go through the training dataset 50 times. You can look at the validation accuracy during training and stop early to avoid overfitting.
Set batch_size = 8 here so you will see that it takes 21 steps to go through the 175 images in the training dataset.
Set train_whole_model=True to fine-tune the whole model instead of just training the head layer to improve accuracy. The trade-off is that it may take longer to train the model.
End of explanation
model.evaluate(test_data)
Explanation: Step 4. Evaluate the model with the test data.
After training the object detection model using the images in the training dataset, use the remaining 25 images in the test dataset to evaluate how the model performs against new data it has never seen before.
As the default batch size is 64, it will take 1 step to go through the 25 images in the test dataset.
The evaluation metrics are same as COCO.
End of explanation
model.export(export_dir='.')
Explanation: Step 5. Export as a TensorFlow Lite model.
Export the trained object detection model to the TensorFlow Lite format by specifying which folder you want to export the quantized model to. The default post-training quantization technique is full integer quantization.
End of explanation
model.evaluate_tflite('model.tflite', test_data)
Explanation: Step 6. Evaluate the TensorFlow Lite model.
Several factors can affect the model accuracy when exporting to TFLite:
* Quantization helps shrinking the model size by 4 times at the expense of some accuracy drop.
* The original TensorFlow model uses per-class non-max supression (NMS) for post-processing, while the TFLite model uses global NMS that's much faster but less accurate.
Keras outputs maximum 100 detections while tflite outputs maximum 25 detections.
Therefore you'll have to evaluate the exported TFLite model and compare its accuracy with the original TensorFlow model.
End of explanation
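Because quantization is one of the accuracy factors listed above, it can be instructive to export a second model with a different quantization setting and compare; a sketch using the QuantizationConfig imported earlier (the file name is arbitrary):
# Hypothetical float16-quantized export for comparison with the default full-integer model.
config = QuantizationConfig.for_float16()
model.export(export_dir='.', tflite_filename='model_fp16.tflite',
             quantization_config=config)
model.evaluate_tflite('model_fp16.tflite', test_data)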
#@title Load the trained TFLite model and define some visualization functions
import cv2
from PIL import Image
model_path = 'model.tflite'
# Load the labels into a list
classes = ['???'] * model.model_spec.config.num_classes
label_map = model.model_spec.config.label_map
for label_id, label_name in label_map.as_dict().items():
classes[label_id-1] = label_name
# Define a list of colors for visualization
COLORS = np.random.randint(0, 255, size=(len(classes), 3), dtype=np.uint8)
def preprocess_image(image_path, input_size):
"""Preprocess the input image to feed to the TFLite model."""
img = tf.io.read_file(image_path)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.uint8)
original_image = img
resized_img = tf.image.resize(img, input_size)
resized_img = resized_img[tf.newaxis, :]
resized_img = tf.cast(resized_img, dtype=tf.uint8)
return resized_img, original_image
def detect_objects(interpreter, image, threshold):
"""Returns a list of detection results, each a dictionary of object info."""
signature_fn = interpreter.get_signature_runner()
# Feed the input image to the model
output = signature_fn(images=image)
# Get all outputs from the model
count = int(np.squeeze(output['output_0']))
scores = np.squeeze(output['output_1'])
classes = np.squeeze(output['output_2'])
boxes = np.squeeze(output['output_3'])
results = []
for i in range(count):
if scores[i] >= threshold:
result = {
'bounding_box': boxes[i],
'class_id': classes[i],
'score': scores[i]
}
results.append(result)
return results
def run_odt_and_draw_results(image_path, interpreter, threshold=0.5):
"""Run object detection on the input image and draw the detection results."""
# Load the input shape required by the model
_, input_height, input_width, _ = interpreter.get_input_details()[0]['shape']
# Load the input image and preprocess it
preprocessed_image, original_image = preprocess_image(
image_path,
(input_height, input_width)
)
# Run object detection on the input image
results = detect_objects(interpreter, preprocessed_image, threshold=threshold)
# Plot the detection results on the input image
original_image_np = original_image.numpy().astype(np.uint8)
for obj in results:
# Convert the object bounding box from relative coordinates to absolute
# coordinates based on the original image resolution
ymin, xmin, ymax, xmax = obj['bounding_box']
xmin = int(xmin * original_image_np.shape[1])
xmax = int(xmax * original_image_np.shape[1])
ymin = int(ymin * original_image_np.shape[0])
ymax = int(ymax * original_image_np.shape[0])
# Find the class index of the current object
class_id = int(obj['class_id'])
# Draw the bounding box and label on the image
color = [int(c) for c in COLORS[class_id]]
cv2.rectangle(original_image_np, (xmin, ymin), (xmax, ymax), color, 2)
# Make adjustments to make the label visible for all objects
y = ymin - 15 if ymin - 15 > 15 else ymin + 15
label = "{}: {:.0f}%".format(classes[class_id], obj['score'] * 100)
cv2.putText(original_image_np, label, (xmin, y),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
# Return the final image
original_uint8 = original_image_np.astype(np.uint8)
return original_uint8
#@title Run object detection and show the detection results
INPUT_IMAGE_URL = "https://storage.googleapis.com/cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg" #@param {type:"string"}
DETECTION_THRESHOLD = 0.3 #@param {type:"number"}
TEMP_FILE = '/tmp/image.png'
!wget -q -O $TEMP_FILE $INPUT_IMAGE_URL
im = Image.open(TEMP_FILE)
im.thumbnail((512, 512), Image.ANTIALIAS)
im.save(TEMP_FILE, 'PNG')
# Load the TFLite model
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
# Run inference and draw detection result on the local copy of the original file
detection_result_image = run_odt_and_draw_results(
TEMP_FILE,
interpreter,
threshold=DETECTION_THRESHOLD
)
# Show the detection result
Image.fromarray(detection_result_image)
Explanation: You can download the TensorFlow Lite model file using the left sidebar of Colab. Right-click on the model.tflite file and choose Download to download it to your local computer.
This model can be integrated into an Android or an iOS app using the ObjectDetector API of the TensorFlow Lite Task Library.
See the TFLite Object Detection sample app for more details on how the model is used in an working app.
Note: Android Studio Model Binding does not support object detection yet so please use the TensorFlow Lite Task Library.
(Optional) Test the TFLite model on your image
You can test the trained TFLite model using images from the internet.
* Replace the INPUT_IMAGE_URL below with your desired input image.
* Adjust the DETECTION_THRESHOLD to change the sensitivity of the model. A lower threshold means the model will pickup more objects but there will also be more false detection. Meanwhile, a higher threshold means the model will only pickup objects that it has confidently detected.
Although it requires some boilerplate code to run the model in Python at this moment, integrating the model into a mobile app only requires a few lines of code.
End of explanation
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
Explanation: (Optional) Compile For the Edge TPU
Now that you have a quantized EfficientDet Lite model, it is possible to compile and deploy to a Coral EdgeTPU.
Step 1. Install the EdgeTPU Compiler
End of explanation
NUMBER_OF_TPUS = 1#@param {type:"number"}
!edgetpu_compiler model.tflite --num_segments=$NUMBER_OF_TPUS
Explanation: Step 2. Select number of Edge TPUs, Compile
The EdgeTPU has 8MB of SRAM for caching model paramaters (more info). This means that for models that are larger than 8MB, inference time will be increased in order to transfer over model paramaters. One way to avoid this is Model Pipelining - splitting the model into segments that can have a dedicated EdgeTPU. This can significantly improve latency.
The below table can be used as a reference for the number of Edge TPUs to use - the larger models will not compile for a single TPU as the intermediate tensors can't fit in on-chip memory.
| Model architecture | Minimum TPUs | Recommended TPUs
|--------------------|-------|-------|
| EfficientDet-Lite0 | 1 | 1 |
| EfficientDet-Lite1 | 1 | 1 |
| EfficientDet-Lite2 | 1 | 2 |
| EfficientDet-Lite3 | 2 | 2 |
| EfficientDet-Lite4 | 2 | 3 |
End of explanation |
11,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Changing Hierarchies
Some of the built-in constraints depend on the system hierarchy, and will automatically adjust to reflect changes to the hierarchy.
For example, the masses depend on the period and semi-major axis of the parent orbit but also depend on the mass-ratio (q) which is defined as the primary mass over secondary mass. For this reason, changing the roles of the primary and secondary components should be reflected in the masses (so long as q remains fixed).
In order to show this example, let's set the mass-ratio to be non-unity.
Step3: Here the star with component tag 'primary' is actually the primary component in the hierarchy, so should have the LARGER mass (for a q < 1.0).
Step4: Now let's flip the hierarchy so that the star with the 'primary' component tag is actually the secondary component in the system (and so takes the role of numerator in q = M2/M1).
For more information on the syntax for setting hierarchies, see the Building a System Tutorial.
Step5: Even though under-the-hood the constraints are being rebuilt from scratch, they will remember if you have flipped them to solve for some other parameter.
To show this, let's flip the constraint for the secondary mass to solve for 'period' and then change the hierarchy back to its original value. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Advanced: Constraints and Changing Hierarchies
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b.set_value('q', 0.8)
Explanation: Changing Hierarchies
Some of the built-in constraints depend on the system hierarchy, and will automatically adjust to reflect changes to the hierarchy.
For example, the masses depend on the period and semi-major axis of the parent orbit but also depend on the mass-ratio (q), which is defined as the secondary mass over the primary mass (q = M2/M1). For this reason, changing the roles of the primary and secondary components should be reflected in the masses (so long as q remains fixed).
In order to show this example, let's set the mass-ratio to be non-unity.
End of explanation
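As a rough sanity check of what the constraint engine is doing, the total mass follows from Kepler's third law and is then split according to q = M2/M1; a small sketch that reads the needed values back from the bundle (assuming the default units of solar radii and days):
# Standalone check of the mass constraint: Kepler's third law plus the mass ratio.
from astropy.constants import G
sma = b.get_value(qualifier='sma', component='binary', context='component') * u.solRad
period = b.get_value(qualifier='period', component='binary', context='component') * u.d
q = b.get_value(qualifier='q', component='binary', context='component')
mtotal = (4 * np.pi**2 * sma**3 / (G * period**2)).to(u.solMass)
m1 = mtotal / (1 + q)   # q = M2/M1, so the primary gets the larger share for q < 1
m2 = q * m1
print(m1, m2)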
print("M1: {}, M2: {}".format(b.get_value(qualifier='mass', component='primary', context='component'),
b.get_value(qualifier='mass', component='secondary', context='component')))
Explanation: Here the star with component tag 'primary' is actually the primary component in the hierarchy, so should have the LARGER mass (for a q < 1.0).
End of explanation
b['mass@primary']
b.set_hierarchy('orbit:binary(star:secondary, star:primary)')
b['mass@primary@star@component']
print(b.get_value('q'))
print("M1: {}, M2: {}".format(b.get_value(qualifier='mass', component='primary', context='component'),
b.get_value(qualifier='mass', component='secondary', context='component')))
Explanation: Now let's flip the hierarchy so that the star with the 'primary' component tag is actually the secondary component in the system (and so takes the role of numerator in q = M2/M1).
For more information on the syntax for setting hierarchies, see the Building a System Tutorial.
End of explanation
print("M1: {}, M2: {}, period: {}, q: {}".format(b.get_value(qualifier='mass', component='primary', context='component'),
b.get_value(qualifier='mass', component='secondary', context='component'),
b.get_value(qualifier='period', component='binary', context='component'),
b.get_value(qualifier='q', component='binary', context='component')))
b.flip_constraint('mass@secondary@constraint', 'period')
print("M1: {}, M2: {}, period: {}, q: {}".format(b.get_value(qualifier='mass', component='primary', context='component'),
b.get_value(qualifier='mass', component='secondary', context='component'),
b.get_value(qualifier='period', component='binary', context='component'),
b.get_value(qualifier='q', component='binary', context='component')))
b.set_value(qualifier='mass', component='secondary', context='component', value=1.0)
print("M1: {}, M2: {}, period: {}, q: {}".format(b.get_value(qualifier='mass', component='primary', context='component'),
b.get_value(qualifier='mass', component='secondary', context='component'),
b.get_value(qualifier='period', component='binary', context='component'),
b.get_value(qualifier='q', component='binary', context='component')))
Explanation: Even though under-the-hood the constraints are being rebuilt from scratch, they will remember if you have flipped them to solve for some other parameter.
To show this, let's flip the constraint for the secondary mass to solve for 'period' and then change the hierarchy back to its original value.
End of explanation |
11,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time series prediction using RNNs + Estimators
This notebook illustrates how to
Step1: Describing the data set and the model
We're using a weather dataset....[DESCRIBE DATA SET, HOW THE DATA WAS GENERATED AND OTHER DETAILS].
The goal is
Step2: Separating training, evaluation and a small test data
The test data is going to be very small (10 sequences) and is being used just for visualization.
Step3: What we want to predict
This is the plot of the labels from the test data.
Step4: Defining Input functions
Step5: RNN Model
Step6: Running model
Step7: Trainning
Step8: Evaluating
Step9: Testing
Step10: Visualizing predictions | Python Code:
#!/usr/bin/env python
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# original code from: https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/blogs/timeseries
# modified by: Marianne Linhares, [email protected], May 2017
# tensorflow
import tensorflow as tf
import tensorflow.contrib.learn as tflearn
import tensorflow.contrib.layers as tflayers
from tensorflow.contrib.learn.python.learn import learn_runner
import tensorflow.contrib.metrics as metrics
import tensorflow.contrib.rnn as rnn
# Rnn common functions
from tensorflow.contrib.learn.python.learn.estimators import rnn_common
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
# helpers
import numpy as np
import pandas as pd
import csv
# enable tensorflow logs
tf.logging.set_verbosity(tf.logging.INFO)
Explanation: Time series prediction using RNNs + Estimators
This notebook illustrates how to:
1. Creating a Recurrent Neural Network in TensorFlow
2. Creating a Custom Estimator in tf.contrib.learn
Dependencies
End of explanation
df = pd.read_csv('weather.csv')
number_of_rows = len(df)
print('number of rows in the dataset:', number_of_rows)
print('how a row looks like:')
print(df.head(11))
print()
print("we don't the year mo da columns, so let's forget about them")
df = df[['avg_tmp', 'avg_dewp', 'avg_slp']]
print(df.head(11))
SEQ_LEN = 10
VALID_ROWS = number_of_rows - SEQ_LEN - 1
NUM_FEATURES = 3
# then we can use indexes to access rows easily
df = np.asarray(df)
# sequences will have shape: [VALID_ROWS, SEQ_LEN, NUM_FEATURES]
sequences = np.zeros((VALID_ROWS, SEQ_LEN, NUM_FEATURES), dtype=np.float32)
labels = np.zeros((VALID_ROWS, 1))
# if the sequence would have len < SEQ_LEN we don't want to use it
# @monteirom: but we can, just need to pass the seq_len as parameter to the dynamic RNN,
# but for now let's keep things simple
for i in range(VALID_ROWS):
sequences[i] = df[i: i + SEQ_LEN]
labels[i] = df[i + SEQ_LEN][0]
print('-' * 20)
print('Example')
print('-' * 20)
print('sequence:')
print(sequences[0])
print('prediction:', labels[0])
Explanation: Describing the data set and the model
We're using a weather dataset....[DESCRIBE DATA SET, HOW THE DATA WAS GENERATED AND OTHER DETAILS].
The goal is: based on the features from the past days, predict the average temperature on the next day. More specifically, we'll use a sequence of 10 days of data to predict the average temperature on the following day.
Preparing the data
First let's prepare the data for the RNN. The RNN input is:
x = sequence of features
y = what we want to predict/classify from x; in our case we want to predict the next avg temperature
Then we have to separate training data from test data.
End of explanation
# these values are based on the number of valid rows which is 32083
TRAIN_SIZE = 30000
EVAL_SIZE = 2073
TEST_SIZE = 10
# TODO(@monteirom): shuffle
train_seq = sequences[:TRAIN_SIZE]
train_label = np.asarray(labels[:TRAIN_SIZE], dtype=np.float32)
eval_seq = sequences[TRAIN_SIZE: TRAIN_SIZE + EVAL_SIZE]
eval_label = np.asarray(labels[TRAIN_SIZE:TRAIN_SIZE + EVAL_SIZE], dtype=np.float32)
test_seq = sequences[TRAIN_SIZE + EVAL_SIZE: ]
test_label = np.asarray(labels[TRAIN_SIZE + EVAL_SIZE: ], dtype=np.float32)
print('train shape:', train_seq.shape)
print('eval shape:', eval_seq.shape)
print('test shape:', test_seq.shape)
Explanation: Separating training, evaluation and a small test data
The test data is going to be very small (10 sequences) and is being used just for visualization.
End of explanation
# getting test labels
test_plot_data = [test_label[i][0] for i in range(TEST_SIZE)]
# plotting
sns.tsplot(test_plot_data)
plt.show()
Explanation: What we want to predict
This is the plot of the labels from the test data.
End of explanation
BATCH_SIZE = 64
FEATURE_KEY = 'x'
SEQ_LEN_KEY = 'sequence_length'
def make_dict(x):
d = {}
d[FEATURE_KEY] = x
# [SIZE OF DATA SET, 1]
# where the second dimesion contains the sequence of each
# sequence in the data set
d[SEQ_LEN_KEY] = np.asarray(x.shape[0] * [SEQ_LEN], dtype=np.int32)
return d
# Make input function for training:
# num_epochs=None -> will cycle through input data forever
# shuffle=True -> randomize order of input data
train_input_fn = tf.estimator.inputs.numpy_input_fn(x=make_dict(train_seq),
y=train_label,
batch_size=BATCH_SIZE,
shuffle=True,
num_epochs=None)
# Make input function for evaluation:
# shuffle=False -> do not randomize input data
eval_input_fn = tf.estimator.inputs.numpy_input_fn(x=make_dict(eval_seq),
y=eval_label,
batch_size=BATCH_SIZE,
shuffle=False)
# Make input function for testing:
# shuffle=False -> do not randomize input data
test_input_fn = tf.estimator.inputs.numpy_input_fn(x=make_dict(test_seq),
y=test_label,
batch_size=1,
shuffle=False)
Explanation: Defining Input functions
End of explanation
N_OUTPUTS = 1 # 1 prediction
NUM_FEATURES = 3
def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01):
def model_fn(features, targets, mode, params):
x = features[FEATURE_KEY]
sequence_length = features[SEQ_LEN_KEY]
# 1. configure the RNN
# Each RNN layer will consist of a LSTM cell
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
outputs, _ = tf.nn.dynamic_rnn(multi_rnn_cell, x, dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs,
sequence_length)
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(last_activations,
units,
activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
# 2. Define the loss function for training/evaluation
#print 'targets={}'.format(targets)
#print 'preds={}'.format(predictions)
loss = tf.losses.mean_squared_error(targets, predictions)
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(targets, predictions)
}
# 3. Define the training operation/optimizer
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=learning_rate,
optimizer=optimizer)
# 4. Create predictions
predictions_dict = {"predicted": predictions}
# 5. return ModelFnOps
return tflearn.ModelFnOps(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
return model_fn
Explanation: RNN Model
End of explanation
model_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers
label_dimension=1, # since is just 1 prediction
dnn_layer_sizes=[32], # size of units in the dense layers on top of the RNN
optimizer='Adam',
learning_rate=0.001)
estimator = tf.contrib.learn.Estimator(model_fn=model_fn)
Explanation: Running model
End of explanation
estimator.fit(input_fn=train_input_fn, steps=10000)
Explanation: Training
End of explanation
ev = estimator.evaluate(input_fn=eval_input_fn)
print(ev)
Explanation: Evaluating
End of explanation
preds = list(estimator.predict(input_fn=test_input_fn))
predictions = []
for p in preds:
print(p)
predictions.append(p["predicted"][0])
Explanation: Testing
End of explanation
# plotting real values in black
sns.tsplot(test_plot_data, color="black")
# plotting predictions in red
sns.tsplot(predictions, color="red")
plt.show()
Explanation: Visualizing predictions
End of explanation |
11,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theano
A language in a language
Dealing with weights matrices and gradients can be tricky and sometimes not trivial.
Theano is a great framework for handling vectors, matrices and high dimensional tensor algebra.
Most of this tutorial will refer to Theano however TensorFlow is another great framework capable of providing an incredible abstraction for complex algebra.
More on TensorFlow in the next chapters.
Step1: Symbolic variables
Theano has it's own variables and functions, defined the following
Step2: Variables can be used in expressions
Step3: y is an expression now
Result is symbolic as well
Step4: printing
As we are about to see, normal printing isn't the best when it comes to theano
Step5: Evaluating expressions
Supply a dict mapping variables to values
Step6: Or compile a function
Step7: Other tensor types
Step8: Automatic differention
Gradients are free!
Step9: Shared Variables
Symbolic + Storage
Step10: We can get and set the variable's value
Step11: Shared variables can be used in expressions as well
Step12: Their value is used as input when evaluating
Step13: Updates
Store results of function evalution
dict mapping shared variables to new values
Step14: Warming up! Logistic Regression
Step15: Kaggle Challenge Data
The Otto Group is one of the world’s biggest e-commerce companies, A consistent analysis of the performance of products is crucial. However, due to diverse global infrastructure, many identical products get classified differently.
For this competition, we have provided a dataset with 93 features for more than 200,000 products. The objective is to build a predictive model which is able to distinguish between our main product categories.
Each row corresponds to a single product. There are a total of 93 numerical features, which represent counts of different events. All features have been obfuscated and will not be defined any further.
https
Step16: Now lets create and train a logistic regression model.
Hands On - Logistic Regression | Python Code:
import theano
import theano.tensor as T
Explanation: Theano
A language in a language
Dealing with weights matrices and gradients can be tricky and sometimes not trivial.
Theano is a great framework for handling vectors, matrices and high dimensional tensor algebra.
Most of this tutorial will refer to Theano however TensorFlow is another great framework capable of providing an incredible abstraction for complex algebra.
More on TensorFlow in the next chapters.
End of explanation
x = T.scalar()
x
Explanation: Symbolic variables
Theano has its own variables and functions, defined as follows:
End of explanation
y = 3*(x**2) + 1
Explanation: Variables can be used in expressions
End of explanation
type(y)
y.shape
Explanation: y is an expression now
Result is symbolic as well
End of explanation
print(y)
theano.pprint(y)
theano.printing.debugprint(y)
Explanation: printing
As we are about to see, normal printing isn't the best when it comes to theano
End of explanation
y.eval({x: 2})
Explanation: Evaluating expressions
Supply a dict mapping variables to values
End of explanation
f = theano.function([x], y)
f(2)
Explanation: Or compile a function
End of explanation
X = T.vector()
X = T.matrix()
X = T.tensor3()
X = T.tensor4()
Explanation: Other tensor types
End of explanation
x = T.scalar()
y = T.log(x)
gradient = T.grad(y, x)
print(gradient)
print(gradient.eval({x: 2}))
print((2 * gradient))
Explanation: Automatic differention
Gradients are free!
End of explanation
import numpy as np
x = theano.shared(np.zeros((2, 3), dtype=theano.config.floatX))
x
Explanation: Shared Variables
Symbolic + Storage
End of explanation
values = x.get_value()
print(values.shape)
print(values)
x.set_value(values)
Explanation: We can get and set the variable's value
End of explanation
(x + 2) ** 2
Explanation: Shared variables can be used in expressions as well
End of explanation
((x + 2) ** 2).eval()
theano.function([], (x + 2) ** 2)()
Explanation: Their value is used as input when evaluating
End of explanation
count = theano.shared(0)
new_count = count + 1
updates = {count: new_count}
f = theano.function([], count, updates=updates)
f()
f()
f()
Explanation: Updates
Store results of function evaluation
dict mapping shared variables to new values
End of explanation
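Updates become genuinely useful once combined with gradients, because that is exactly how training loops are written; a minimal sketch of repeated gradient-descent steps on a shared weight (toy cost and made-up numbers):
# One shared weight, a toy quadratic cost, and an update rule that applies a gradient step.
w = theano.shared(np.asarray(5.0, dtype=theano.config.floatX))
x = T.scalar()
cost = (w * x - 3) ** 2
gw = T.grad(cost, w)
train_step = theano.function([x], cost, updates={w: w - 0.1 * gw})
for _ in range(5):
    print(train_step(1.0))
print(w.get_value())   # w moves toward 3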
%matplotlib inline
import numpy as np
import theano
import theano.tensor as T
import matplotlib.pyplot as plt
Explanation: Warming up! Logistic Regression
End of explanation
import os
import sys
nb_dir = os.path.abspath('..')
if nb_dir not in sys.path:
sys.path.append(nb_dir)
from kaggle_data import load_data, preprocess_data, preprocess_labels
print("Loading data...")
X, labels = load_data('../data/kaggle_ottogroup/train.csv', train=True)
X, scaler = preprocess_data(X)
Y, encoder = preprocess_labels(labels)
X_test, ids = load_data('../data/kaggle_ottogroup/test.csv', train=False)
X_test, ids = X_test[:1000], ids[:1000]
#Plotting the data
print(X_test[:1])
X_test, _ = preprocess_data(X_test, scaler)
nb_classes = Y.shape[1]
print(nb_classes, 'classes')
dims = X.shape[1]
print(dims, 'dims')
Explanation: Kaggle Challenge Data
The Otto Group is one of the world's biggest e-commerce companies. A consistent analysis of the performance of products is crucial. However, due to diverse global infrastructure, many identical products get classified differently.
For this competition, we have provided a dataset with 93 features for more than 200,000 products. The objective is to build a predictive model which is able to distinguish between our main product categories.
Each row corresponds to a single product. There are a total of 93 numerical features, which represent counts of different events. All features have been obfuscated and will not be defined any further.
https://www.kaggle.com/c/otto-group-product-classification-challenge/data
For this section we will use the Kaggle Otto Group Challenge Data. You will find these data in
../data/kaggle_ottogroup/ folder.
Note We already used this dataset in the 1.2 Introduction - Tensorflow notebook, as well as 1.3 Introduction - Keras notebook.
End of explanation
#Based on example from DeepLearning.net
rng = np.random
N = 400
feats = 93
training_steps = 10
# Declare Theano symbolic variables
x = T.matrix("x")
y = T.vector("y")
w = theano.shared(rng.randn(feats), name="w")
b = theano.shared(0., name="b")
# Construct Theano expression graph
p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b)) # Probability that target = 1
prediction = p_1 > 0.5 # The prediction thresholded
xent = -y * T.log(p_1) - (1-y) * T.log(1-p_1) # Cross-entropy loss function
cost = xent.mean() + 0.01 * (w ** 2).sum()# The cost to minimize
gw, gb = T.grad(cost, [w, b]) # Compute the gradient of the cost
# (we shall return to this in a
# following sections of this tutorial
# See: Intro to tf & Keras)
# Compile
train = theano.function(
inputs=[x,y],
outputs=[prediction, xent],
updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)),
allow_input_downcast=True)
predict = theano.function(inputs=[x], outputs=prediction, allow_input_downcast=True)
#Transform for class1
y_class1 = []
for i in Y:
y_class1.append(i[0])
y_class1 = np.array(y_class1)
# Train
for i in range(training_steps):
print('Epoch %s' % (i+1,))
pred, err = train(X, y_class1)
print("target values for Data:")
print(y_class1)
print("prediction on training set:")
print(predict(X))
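# Quick sanity check of the fit: accuracy of the thresholded class-1 predictions on the
# training set (rough indication only, not a held-out evaluation).
print("training accuracy: %.3f" % np.mean(predict(X) == y_class1))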
Explanation: Now let's create and train a logistic regression model.
Hands On - Logistic Regression
End of explanation |
11,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content
Glossary
1. Somename
Next
Step1: Import section specific modules | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Content
Glossary
1. Somename
Next: 1.2 Somename 3
Import standard modules:
End of explanation
pass
Explanation: Import section specific modules:
End of explanation |
11,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VectorView and OPM resting state datasets
Here we compute the resting state from raw for data recorded using
a Neuromag VectorView system and a custom OPM system.
The pipeline is meant to mostly follow the Brainstorm [1]
OMEGA resting tutorial pipeline <bst_omega_>.
The steps we use are
Step1: Load data, resample. We will store the raw objects in dicts with entries
"vv" and "opm" to simplify housekeeping and simplify looping later.
Step2: Do some minimal artifact rejection just for VectorView data
Step3: Explore data
Step4: Alignment and forward
Step5: Compute and apply inverse to PSD estimated using multitaper + Welch.
Group into frequency bands, then normalize each source point and sensor
independently. This makes the value of each sensor point and source location
in each frequency band the percentage of the PSD accounted for by that band.
Step6: Now we can make some plots of each frequency band. Note that the OPM head
coverage is only over right motor cortex, so only localization
of beta is likely to be worthwhile.
Theta
Step7: Alpha
Step8: Beta
Here we also show OPM data, which shows a profile similar to the VectorView
data beneath the sensors.
Step9: Gamma | Python Code:
# sphinx_gallery_thumbnail_number = 14
# Authors: Denis Engemann <[email protected]>
# Luke Bloy <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
from mne.filter import next_fast_len
from mayavi import mlab
import mne
print(__doc__)
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
src_fname = op.join(bem_dir, '%s-oct6-src.fif' % subject)
vv_fname = data_path + '/MEG/SQUID/SQUID_resting_state.fif'
vv_erm_fname = data_path + '/MEG/SQUID/SQUID_empty_room.fif'
vv_trans_fname = data_path + '/MEG/SQUID/SQUID-trans.fif'
opm_fname = data_path + '/MEG/OPM/OPM_resting_state_raw.fif'
opm_erm_fname = data_path + '/MEG/OPM/OPM_empty_room_raw.fif'
opm_trans_fname = None
opm_coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
Explanation: VectorView and OPM resting state datasets
Here we compute the resting state from raw data recorded using
a Neuromag VectorView system and a custom OPM system.
The pipeline is meant to mostly follow the Brainstorm [1]
OMEGA resting tutorial pipeline <bst_omega_>.
The steps we use are:
Filtering: downsample heavily.
Artifact detection: use SSP for EOG and ECG.
Source localization: dSPM, depth weighting, cortically constrained.
Frequency: power spectrum density (Welch), 4 sec window, 50% overlap.
Standardize: normalize by relative power for each source.
:depth: 1
Preprocessing
End of explanation
raws = dict()
raw_erms = dict()
new_sfreq = 90. # Nyquist frequency (45 Hz) < line noise freq (50 Hz)
raws['vv'] = mne.io.read_raw_fif(vv_fname, verbose='error') # ignore naming
raws['vv'].load_data().resample(new_sfreq)
raws['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raw_erms['vv'] = mne.io.read_raw_fif(vv_erm_fname, verbose='error')
raw_erms['vv'].load_data().resample(new_sfreq)
raw_erms['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raws['opm'] = mne.io.read_raw_fif(opm_fname)
raws['opm'].load_data().resample(new_sfreq)
raw_erms['opm'] = mne.io.read_raw_fif(opm_erm_fname)
raw_erms['opm'].load_data().resample(new_sfreq)
# Make sure our assumptions later hold
assert raws['opm'].info['sfreq'] == raws['vv'].info['sfreq']
Explanation: Load data, resample. We will store the raw objects in dicts with entries
"vv" and "opm" to simplify housekeeping and simplify looping later.
End of explanation
titles = dict(vv='VectorView', opm='OPM')
ssp_ecg, _ = mne.preprocessing.compute_proj_ecg(
raws['vv'], tmin=-0.1, tmax=0.1, n_grad=1, n_mag=1)
raws['vv'].add_proj(ssp_ecg, remove_existing=True)
# due to how compute_proj_eog works, it keeps the old projectors, so
# the output contains both projector types (and also the original empty-room
# projectors)
ssp_ecg_eog, _ = mne.preprocessing.compute_proj_eog(
raws['vv'], n_grad=1, n_mag=1, ch_name='MEG0112')
raws['vv'].add_proj(ssp_ecg_eog, remove_existing=True)
raw_erms['vv'].add_proj(ssp_ecg_eog)
fig = mne.viz.plot_projs_topomap(raws['vv'].info['projs'][-4:],
info=raws['vv'].info)
fig.suptitle(titles['vv'])
fig.subplots_adjust(0.05, 0.05, 0.95, 0.85)
Explanation: Do some minimal artifact rejection just for VectorView data
End of explanation
kinds = ('vv', 'opm')
n_fft = next_fast_len(int(round(4 * new_sfreq)))
print('Using n_fft=%d (%0.1f sec)' % (n_fft, n_fft / raws['vv'].info['sfreq']))
for kind in kinds:
fig = raws[kind].plot_psd(n_fft=n_fft, proj=True)
fig.suptitle(titles[kind])
fig.subplots_adjust(0.1, 0.1, 0.95, 0.85)
Explanation: Explore data
End of explanation
src = mne.read_source_spaces(src_fname)
bem = mne.read_bem_solution(bem_fname)
fwd = dict()
trans = dict(vv=vv_trans_fname, opm=opm_trans_fname)
# check alignment and generate forward
with mne.use_coil_def(opm_coil_def_fname):
for kind in kinds:
        dig = (kind == 'vv')
fig = mne.viz.plot_alignment(
raws[kind].info, trans=trans[kind], subject=subject,
subjects_dir=subjects_dir, dig=dig, coord_frame='mri',
surfaces=('head', 'white'))
mlab.view(0, 90, focalpoint=(0., 0., 0.), distance=0.6, figure=fig)
fwd[kind] = mne.make_forward_solution(
raws[kind].info, trans[kind], src, bem, eeg=False, verbose=True)
Explanation: Alignment and forward
End of explanation
freq_bands = dict(
delta=(2, 4), theta=(5, 7), alpha=(8, 12), beta=(15, 29), gamma=(30, 45))
topos = dict(vv=dict(), opm=dict())
stcs = dict(vv=dict(), opm=dict())
snr = 3.
lambda2 = 1. / snr ** 2
for kind in kinds:
noise_cov = mne.compute_raw_covariance(raw_erms[kind])
inverse_operator = mne.minimum_norm.make_inverse_operator(
raws[kind].info, forward=fwd[kind], noise_cov=noise_cov, verbose=True)
stc_psd, sensor_psd = mne.minimum_norm.compute_source_psd(
raws[kind], inverse_operator, lambda2=lambda2,
n_fft=n_fft, dB=False, return_sensor=True, verbose=True)
topo_norm = sensor_psd.data.sum(axis=1, keepdims=True)
stc_norm = stc_psd.sum() # same operation on MNE object, sum across freqs
# Normalize each source point by the total power across freqs
for band, limits in freq_bands.items():
data = sensor_psd.copy().crop(*limits).data.sum(axis=1, keepdims=True)
topos[kind][band] = mne.EvokedArray(
100 * data / topo_norm, sensor_psd.info)
stcs[kind][band] = \
100 * stc_psd.copy().crop(*limits).sum() / stc_norm.data
Explanation: Compute and apply inverse to PSD estimated using multitaper + Welch.
Group into frequency bands, then normalize each source point and sensor
independently. This makes the value of each sensor point and source location
in each frequency band the percentage of the PSD accounted for by that band.
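Put differently, the normalization in the loop above computes, for every sensor (and every source vertex), the band-limited power as a percentage of the total power across all frequencies:
$$\text{band power}\ (\%) = 100 \cdot \frac{\sum_{f \in \text{band}} \mathrm{PSD}(f)}{\sum_{f} \mathrm{PSD}(f)}$$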
End of explanation
def plot_band(kind, band):
title = "%s %s\n(%d-%d Hz)" % ((titles[kind], band,) + freq_bands[band])
    # capture the topomap figure so it can be returned below
    fig = topos[kind][band].plot_topomap(
times=0., scalings=1., cbar_fmt='%0.1f', vmin=0, cmap='inferno',
time_format=title)
brain = stcs[kind][band].plot(
subject=subject, subjects_dir=subjects_dir, views='cau', hemi='both',
time_label=title, title=title, colormap='inferno',
clim=dict(kind='percent', lims=(70, 85, 99)))
brain.show_view(dict(azimuth=0, elevation=0), roll=0)
return fig, brain
fig_theta, brain_theta = plot_band('vv', 'theta')
Explanation: Now we can make some plots of each frequency band. Note that the OPM head
coverage is only over right motor cortex, so only localization
of beta is likely to be worthwhile.
Theta
End of explanation
fig_alpha, brain_alpha = plot_band('vv', 'alpha')
Explanation: Alpha
End of explanation
fig_beta, brain_beta = plot_band('vv', 'beta')
fig_beta_opm, brain_beta_opm = plot_band('opm', 'beta')
Explanation: Beta
Here we also show OPM data, which shows a profile similar to the VectorView
data beneath the sensors.
End of explanation
fig_gamma, brain_gamma = plot_band('vv', 'gamma')
Explanation: Gamma
End of explanation |
11,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook demonstrates the core functionality of pymatgen, including the core objects representing Elements, Species, Lattices, and Structures.
By convention, we import pymatgen as mg.
Step1: Basic Element, Specie and Composition objects
Pymatgen contains a set of core classes to represent an Element, Specie and Composition. These objects contain useful properties such as atomic mass, ionic radii, etc. These core classes are loaded by default with pymatgen. An Element can be created as follows
Step2: You can see that units are printed for atomic masses and ionic radii. Pymatgen comes with a complete system of managing units in pymatgen.core.unit. A Unit is a subclass of float that attaches units and handles conversions. For example,
Step3: Please refer to the Units example for more information on units. Species are like Elements, except they have an explicit oxidation state. They can be used wherever Element is used for the most part.
Step4: A Composition is essentially an immutable mapping of Elements/Species with amounts, and useful properties like molecular weight, get_atomic_fraction, etc. Note that you can conveniently either use an Element/Specie object or a string as keys (this is a feature).
Step5: Lattice & Structure objects
A Lattice represents a Bravais lattice. Convenience static functions are provided for the creation of common lattice types from a minimum number of arguments.
Step6: A Structure object represents a crystal structure (lattice + basis). A Structure is essentially a list of PeriodicSites with the same Lattice. Let us now create a CsCl structure.
Step7: The Structure object contains many useful manipulation functions. Since Structure is essentially a list, it contains a simple pythonic API for manipulating its sites. Some examples are given below. Please note that there is an immutable version of Structure known as IStructure, for the use case where you really need to enforce that the structure does not change. Conversion between these forms of Structure can be performed using from_sites().
Step8: Basic analyses
Pymatgen provides many analysis functions for Structures. Some common ones are given below.
Step9: We also have an extremely powerful structure matching tool.
Step10: Input/output
Pymatgen also provides IO support for various file formats in the pymatgen.io package. A convenient set of read_structure and write_structure functions is also provided, which auto-detects several well-known formats.
Step11: The vaspio_set module provides a means to obtain a complete set of VASP input files for performing calculations. Several useful presets based on the parameters used in the Materials Project are provided. | Python Code:
import pymatgen as mg
Explanation: Introduction
This notebook demonstrates the core functionality of pymatgen, including the core objects representing Elements, Species, Lattices, and Structures.
By convention, we import pymatgen as mg.
End of explanation
si = mg.Element("Si")
print("Atomic mass of Si is {}".format(si.atomic_mass))
print("Si has a melting point of {}".format(si.melting_point))
print("Ionic radii for Si: {}".format(si.ionic_radii))
Explanation: Basic Element, Specie and Composition objects
Pymatgen contains a set of core classes to represent an Element, Specie and Composition. These objects contain useful properties such as atomic mass, ionic radii, etc. These core classes are loaded by default with pymatgen. An Element can be created as follows:
End of explanation
print("Atomic mass of Si in kg: {}".format(si.atomic_mass.to("kg")))
Explanation: You can see that units are printed for atomic masses and ionic radii. Pymatgen comes with a complete system of managing units in pymatgen.core.unit. A Unit is a subclass of float that attaches units and handles conversions. For example,
End of explanation
fe2 = mg.Specie("Fe", 2)
print(fe2.atomic_mass)
print(fe2.ionic_radius)
Explanation: Please refer to the Units example for more information on units. Species are like Elements, except they have an explicit oxidation state. They can be used wherever Element is used for the most part.
End of explanation
comp = mg.Composition("Fe2O3")
print("Weight of Fe2O3 is {}".format(comp.weight))
print("Amount of Fe in Fe2O3 is {}".format(comp["Fe"]))
print("Atomic fraction of Fe is {}".format(comp.get_atomic_fraction("Fe")))
print("Weight fraction of Fe is {}".format(comp.get_wt_fraction("Fe")))
Explanation: A Composition is essentially an immutable mapping of Elements/Species with amounts, and useful properties like molecular weight, get_atomic_fraction, etc. Note that you can conveniently either use an Element/Specie object or a string as keys (this is a feature).
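As a quick sketch of that last point (reusing the comp object defined above; the exact calls shown are an assumption for illustration, not an exhaustive list), an Element object and its string symbol should be interchangeable as keys:
fe = mg.Element("Fe")
print(comp[fe] == comp["Fe"])  # same amount whichever key form is used
print(comp.get_atomic_fraction(fe) == comp.get_atomic_fraction("Fe"))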
End of explanation
# Creates cubic Lattice with lattice parameter 4.2
lattice = mg.Lattice.cubic(4.2)
print(lattice.lengths_and_angles)
Explanation: Lattice & Structure objects
A Lattice represents a Bravais lattice. Convenience static functions are provided for the creation of common lattice types from a minimum number of arguments.
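For instance (a small illustrative sketch; the parameter values are arbitrary and the constructors are assumed to be available in this pymatgen version), other common lattices can be built as:
hexagonal_lattice = mg.Lattice.hexagonal(a=2.95, c=4.68)
general_lattice = mg.Lattice.from_parameters(3.0, 3.0, 3.0, 90, 90, 120)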
End of explanation
structure = mg.Structure(lattice, ["Cs", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]])
print("Unit cell vol = {}".format(structure.volume))
print("First site of the structure is {}".format(structure[0]))
Explanation: A Structure object represents a crystal structure (lattice + basis). A Structure is essentially a list of PeriodicSites with the same Lattice. Let us now create a CsCl structure.
End of explanation
structure.make_supercell([2, 2, 1]) #Make a 2 x 2 x 1 supercell of the structure
del structure[0] #Remove the first site
structure.append("Na", [0,0,0]) #Append a Na atom.
structure[-1] = "Li" #Change the last added atom to Li.
structure[0] = "Cs", [0.01, 0.5, 0] #Shift the first atom by 0.01 in fractional coordinates in the x-direction.
immutable_structure = mg.IStructure.from_sites(structure) #Create an immutable structure (cannot be modified).
print(immutable_structure)
Explanation: The Structure object contains many useful manipulation functions. Since Structure is essentially a list, it contains a simple pythonic API for manipulating its sites. Some examples are given below. Please note that there is an immutable version of Structure known as IStructure, for the use case where you really need to enforce that the structure does not change. Conversion between these forms of Structure can be performed using from_sites().
End of explanation
#Determining the symmetry
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
finder = SpacegroupAnalyzer(structure)
print("The spacegroup is {}".format(finder.get_space_group_symbol()))
Explanation: Basic analyses
Pymatgen provides many analysis functions for Structures. Some common ones are given below.
End of explanation
from pymatgen.analysis.structure_matcher import StructureMatcher
#Let's create two structures which are the same topologically, but with different elements, and one lattice is larger.
s1 = mg.Structure(lattice, ["Cs", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]])
s2 = mg.Structure(mg.Lattice.cubic(5), ["Rb", "F"], [[0, 0, 0], [0.5, 0.5, 0.5]])
m = StructureMatcher()
print(m.fit_anonymous(s1, s2)) #Returns a mapping which maps s1 and s2 onto each other. Strict element fitting is also available.
Explanation: We also have an extremely powerful structure matching tool.
End of explanation
#Convenient IO to various formats. Format is intelligently determined from file name and extension.
structure.to(filename="POSCAR")
structure.to(filename="CsCl.cif")
#Or if you just supply fmt, you simply get a string.
print(structure.to(fmt="poscar"))
print(structure.to(fmt="cif"))
#Reading a structure from a file.
structure = mg.Structure.from_file("POSCAR")
Explanation: Input/output
Pymatgen also provides IO support for various file formats in the pymatgen.io package. A convenient set of read_structure and write_structure functions is also provided, which auto-detects several well-known formats.
End of explanation
from pymatgen.io.vasp.sets import MPRelaxSet
v = MPRelaxSet(structure)
v.write_input("MyInputFiles") #Writes a complete set of input files for structure to the directory MyInputFiles
Explanation: The vaspio_set module provides a means to obtain a complete set of VASP input files for performing calculations. Several useful presets based on the parameters used in the Materials Project are provided.
End of explanation |
11,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Asymmetric Encryption
Use hard math problems called “Trapdoor Functions”
Example
Step1: Risk
Attacker can use known plaintext and A's public Ke to test a generated private key.
Computation
Asymmetric key encryption is very expensive
Never encrypt message; transmit encrypted symmetric key
Ex | Python Code:
#Lets test this idea...
import cProfile, pstats, StringIO
Prime1=307 #Try some others 7907 15485857 7919 15485863
Prime2=293
#or get some big primes from here:https://primes.utm.edu/
def factors(n):
return set(reduce(list.__add__,
([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0)))
def printStats(pr):
s = StringIO.StringIO()
sortby = 'cumulative'
ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
ps.print_stats()
print s.getvalue()
pr = cProfile.Profile()
pr.enable()
MyProduct=Prime1 * Prime2
pr.disable()
print "Product of 2 Primes=", MyProduct
print "Execution Stats:\n",printStats(pr)
print "Lets factor it!"
pr.enable()
MyFactors=factors(MyProduct)
pr.disable()
print "Factors of ", MyProduct, " are:",MyFactors
print "Execution Stats:\n",printStats(pr)
Explanation: Asymmetric Encryption
Use hard math problems called “Trapdoor Functions”
Example: Factoring Multiples of Primes
Easy to multiply, hard to factor
Difficult to process without a key but easy to process with a key (the “trapdoor”)
End of explanation
#Alice & Bob: Agree on a large prime p (1024+ bits)
p = 23
#Alice & Bob: Agree on a generator g
base_g = 5
#Alice & Bob: Choose private numbers a (S) & b (R)
Apriv = 6 #Alice's secret
Bpriv = 15 #Bob's secret
print "Alice sends Bob A = g^Apriv mod p"
#A = 5^6 mod 23
Apub = (base_g**Apriv)%p
print "Alice's shared value->Bob=",Apub
print "Bob sends Alice B = g^Bpriv mod p"
#B = 5^15 mod 23
Bpub=(base_g**Bpriv)%p
print "Bob's shared value->Alice=",Bpub,"\n"
print "Alice computes s = B^Apriv mod p"
#s = 19^6 mod 23
s=(Bpub**Apriv)%p
print "Alice has Shared Secret=",s,"\n"
print "Bob computes s = A^BPriv mod p"
#s = 8^15 mod 23
s=(Apub**Bpriv)%p
print "Bob has Shared Secret=",s
#Alice and Bob now share a secret (2).
Explanation: Risk
Attacker can use known plaintext and A's public Ke to test a generated private key.
Computation
Asymmetric key encryption is very expensive
Never encrypt message; transmit encrypted symmetric key
Ex: In software RSA is 100 times slower than DES
In embedded hardware it is even slower
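A toy sketch of that hybrid pattern in pure Python (all numbers are illustrative assumptions and far too small for real use; the XOR step merely stands in for a proper symmetric cipher):
p, q = 307, 293
n = p * q                       # public modulus
phi = (p - 1) * (q - 1)
e, d = 7, 63823                 # toy public/private exponents
assert (e * d) % phi == 1       # sanity check on the toy key pair
session_key = 1234                        # short symmetric session key
wrapped = pow(session_key, e, n)          # sender: asymmetrically encrypt only the key
assert pow(wrapped, d, n) == session_key  # receiver: recovers the session key
plaintext = "bulk message goes under the fast symmetric cipher"
stream = [(session_key * (i + 1)) % 256 for i in range(len(plaintext))]
ciphertext = [ord(c) ^ k for c, k in zip(plaintext, stream)]
recovered = "".join(chr(c ^ k) for c, k in zip(ciphertext, stream))
assert recovered == plaintext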
Delay Time
Stream ≤ Stream-Block ≤ Block <br>
DES: 64-bit blocks<br>
RSA: 100-200-bit blocks (short blocks: limited security)
Diffie-Hellman Key Exchange
Created by Whitfield Diffie and Martin Hellman
One of the most common encryption protocols in use today.
Key exchange for SSL, TLS, SSH and IPsec
Key per session, not to be reused
A couple of variations have evolved
Ephemeral Diffie-Hellman (EDH)
Elliptic Curve Diffie-Hellman (ECDH)
Elliptic Curve Diffie-Hellman Ephemeral (ECDHE)
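A side note on the arithmetic (hedged, since the Diffie-Hellman demo in this notebook deliberately uses tiny numbers): for realistic 1024-bit-plus parameters, prefer Python's built-in three-argument pow(), which performs modular exponentiation directly instead of materialising g**a.
g, p = 5, 23
a, b = 6, 15
A = pow(g, a, p)   # same value as (g**a) % p, but scales to huge exponents
B = pow(g, b, p)
assert pow(B, a, p) == pow(A, b, p) == 2   # both sides derive the shared secret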
Example
End of explanation |
11,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installation
Follow directions at the PySAL-ArcGIS-Toolbox Git Repository [https
Step1: Example
Step2: Use the PySAL-ArcGIS Utilities to Read in Spatial Weights Files
Step3: Run the Auto Model Class and Export Your Data to an Output Feature Class
Step4: Compare OLS and Spatial Lag Results | Python Code:
import arcpy as ARCPY
import arcgisscripting as ARC
import SSDataObject as SSDO
import SSUtilities as UTILS
import WeightsUtilities as WU
import numpy as NUM
import scipy as SCIPY
import pysal as PYSAL
import os as OS
import pandas as PANDAS
Explanation: Installation
Follow directions at the PySAL-ArcGIS-Toolbox Git Repository [https://github.com/Esri/PySAL-ArcGIS-Toolbox]
Please make note of the section on Adding a Git Project to your ArcGIS Installation Python Path.
End of explanation
inputFC = r'../data/CA_Polygons.shp'
fullFC = OS.path.abspath(inputFC)
fullPath, fcName = OS.path.split(fullFC)
ssdo = SSDO.SSDataObject(inputFC)
uniqueIDField = "MYID"
fieldNames = ['GROWTH', 'LOGPCR69', 'PERCNOHS', 'POP1969']
ssdo.obtainData(uniqueIDField, fieldNames)
df = ssdo.getDataFrame()
print(df.head())
Explanation: Example: Testing the Income Convergence Hypothesis in California Counties (1969 - 2010)
Use the Auto-Model Spatial Econometric Tool to identify the appropriate model
Regressing the growth rate of incomes on the log of starting incomes
a significant negative coefficient indicates convergence
The percentage of the population w/o a high school education and the population itself are the other exogenous factors.
Importing Your Data into a PANDAS DataFrame
End of explanation
import pysal2ArcUtils as PYSAL_UTILS
swmFile = OS.path.join(fullPath, "queen.swm")
W = PYSAL_UTILS.PAT_W(ssdo, swmFile)
w = W.w
kernelSWMFile = OS.path.join(fullPath, "knn8.swm")
KW = PYSAL_UTILS.PAT_W(ssdo, kernelSWMFile)
kw = KW.w
Explanation: Use the PySAL-ArcGIS Utilities to Read in Spatial Weights Files
End of explanation
import AutoModel as AUTO
auto = AUTO.AutoSpace_PySAL(ssdo, "GROWTH", ['LOGPCR69', 'PERCNOHS', 'POP1969'],
W, KW, pValue = 0.1, useCombo = True)
ARCPY.env.overwriteOutput = True
outputFC = r'../data/pysal_automodel.shp'
auto.createOutput(outputFC)
Explanation: Run the Auto Model Class and Export Your Data to an Output Feature Class
End of explanation
print(auto.olsModel.summary)
print(auto.finalModel.summary)
Explanation: Compare OLS and Spatial Lag Results
End of explanation |
11,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyze Issue Label Bot
This notebook is used to compute metrics to evaluate performance of the issue label bot.
Step2: Setup Authorization
If you are using a service account run
%%bash
Activate Service Account provided by Kubeflow.
gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
If you are running using user credentials
gcloud auth application-default login
Query Bigquery
We need to query BigQuery to get the issues where we added predictions
Step3: Number of Issues Labeled Per Day
Make a plot of the number of issues labeled each day | Python Code:
import altair as alt
import collections
import importlib
import logging
import sys
import os
import datetime
from dateutil import parser as dateutil_parser
import glob
import json
import numpy as np
import pandas as pd
from pandas.io import gbq
# A bit of a hack to set the path correctly
sys.path = [os.path.abspath(os.path.join(os.getcwd(), "..", "..", "py"))] + sys.path
logging.basicConfig(level=logging.INFO,
format=('%(levelname)s|%(asctime)s'
'|%(message)s|%(pathname)s|%(lineno)d|'),
datefmt='%Y-%m-%dT%H:%M:%S',
)
logging.getLogger().setLevel(logging.INFO)
alt.renderers.enable('html')
import getpass
import subprocess
# Configuration Variables. Modify as desired.
PROJECT = subprocess.check_output(["gcloud", "config", "get-value", "project"]).strip().decode()
Explanation: Analyze Issue Label Bot
This notebook is used to compute metrics to evaluate performance of the issue label bot.
End of explanation
query = """
SELECT
  timestamp,
  jsonPayload.repo_owner,
  jsonPayload.repo_name,
  cast(jsonPayload.issue_num as numeric) as issue_num,
  jsonPayload.predictions
FROM `issue-label-bot-dev.issue_label_bot_logs_dev.stderr_*`
where jsonPayload.message = "Add labels to issue."
  and timestamp_diff(current_timestamp(), timestamp, day) <=28
"""
labeled=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
# Count how many times each label was added
label_counts = collections.defaultdict(lambda: 0)
# We need to compute the number of issues that got labeled with an area or kind label
results = pd.DataFrame(index=range(labeled.shape[0]), columns=["area", "kind"])
results = results.fillna(0)
for i in range(labeled.shape[0]):
predictions = labeled["predictions"][i]
if not predictions:
continue
# Loop over the predictions to see if one of them includes an area or kind label
for l, p in predictions.items():
label_counts[l] = label_counts[l] + 1
# Now for each issue count whether a particular label is added
issue_labels = pd.DataFrame(index=range(labeled.shape[0]), columns=label_counts.keys())
issue_labels = issue_labels.fillna(0)
for c in ["repo_owner", "repo_name", "issue_num"]:
issue_labels[c] = labeled[c]
for i in range(labeled.shape[0]):
predictions = labeled["predictions"][i]
if not predictions:
continue
for l, p in predictions.items():
if not p:
continue
issue_labels.at[i, l] = 1
# Deduplicate the rows
# We need to group by (repo_owner, repo_name, issue_num); we should take the max of each column
# as a way of dealing with duplicates
issue_labels = issue_labels.groupby(["repo_owner", "repo_name", "issue_num"], as_index=False).max()
# Create a mapping from label prefixes to all all the labels with that prefix
# e.g. area -> ["area_jupyter", "area_kfctl", ...]
label_prefixes = collections.defaultdict(lambda: [])
for l in label_counts.keys():
pieces = l.split("_")
if len(pieces) <= 1:
continue
label_prefixes[pieces[0]] = label_prefixes[pieces[0]] + [l]
# Add remappings.
# The log entries associated with "Add labels to issue." log the model predictions before label remapping
# is applied; i.e. before feature is remapped to kind/feature.
# So we want to apply those mappings here before computing the stats.
#
# TODO(https://github.com/kubeflow/code-intelligence/issues/109): We should arguably load these from
# the YAML files configuring label bot.
for l in ["bug", "feature", "feature_request", "question"]:
if l not in label_counts.keys():
continue
label_prefixes["kind"] = label_prefixes["kind"] + [l]
# Now for each issue aggregate across all labels with a given prefix to see if the issue has at least one
# of the given prefix labels
issue_group_labels = pd.DataFrame(index=range(issue_labels.shape[0]), columns=label_prefixes.keys())
issue_group_labels = issue_group_labels.fillna(0)
for c in ["repo_owner", "repo_name", "issue_num"]:
issue_group_labels[c] = issue_labels[c]
for prefix, labels in label_prefixes.items():
issue_group_labels[prefix] = issue_labels[labels].max(axis=1)
# Compute the number of issues with at least one of the specified prefixes
rows = ["area", "platform", "kind"]
num_issues = issue_group_labels.shape[0]
counts = issue_group_labels[rows].sum(axis=0)
stats = pd.DataFrame(index=range(len(rows)), columns = ["label", "count", "percentage"])
stats["label"] = counts.index
stats["count"] = counts.values
stats["percentage"] = stats["count"]/float(num_issues) *100
print(f"Total # of issues {num_issues}")
print("Number and precentage of issues with labels with various prefixes")
stats
chart = alt.Chart(stats)
chart.mark_point().encode(
x='label',
y='count',
).interactive()
Explanation: Setup Authorization
If you are using a service account run
%%bash
Activate Service Account provided by Kubeflow.
gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
If you are running using user credentials
gcloud auth application-default login
Query Bigquery
We need to query BigQuery to get the issues where we added predictions
End of explanation
import numpy as np
issues_per_day = labeled[["timestamp","repo_owner", "repo_name", "issue_num"]]
# Deduplicate the issues by taking the first entry
issues_per_day = issues_per_day.groupby(["repo_owner", "repo_name", "issue_num"], as_index=False).min()
# Compute the day
issues_per_day["day"] = issues_per_day["timestamp"].apply(lambda x: datetime.datetime(x.year, x.month, x.day))
issue_counts = issues_per_day[["day"]]
issue_counts["num_issues"] = 1
issue_counts = issue_counts.groupby(["day"], as_index=False).sum()
chart = alt.Chart(issue_counts)
line = chart.mark_line().encode(
x=alt.X('day'),
y=alt.Y('num_issues'),
)
point = line + line.mark_point()
point.interactive()
Explanation: Number of Issues Labeled Per Day
Make a plot of the number of issues labeled each day
End of explanation |
11,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch 08
Step1: Define an abstract class called DecisionPolicy
Step2: Here's one way we could implement the decision policy, called a random decision policy
Step3: That's a good baseline. Now let's use a smarter approach using a neural network
Step4: Define a function to run a simulation of buying and selling stocks from a market
Step5: We want to run simulations multiple times and average out the performances
Step6: Call the following function to use the Yahoo Finance library and obtain useful stock market data.
Step7: Who wants to deal with stock market data without looking at pretty plots? No one. So we need a plotting function
Step8: Train a reinforcement learning policy | Python Code:
%matplotlib inline
from yahoo_finance import Share
from matplotlib import pyplot as plt
import numpy as np
import random
import tensorflow as tf
import random
Explanation: Ch 08: Concept 01
Reinforcement learning
The states are previous history of stock prices, current budget, and current number of shares of a stock.
The actions are buy, sell, or hold (i.e. do nothing).
The stock market data comes from the Yahoo Finance library, pip install yahoo-finance.
End of explanation
class DecisionPolicy:
def select_action(self, current_state, step):
pass
def update_q(self, state, action, reward, next_state):
pass
Explanation: Define an abstract class called DecisionPolicy:
End of explanation
class RandomDecisionPolicy(DecisionPolicy):
def __init__(self, actions):
self.actions = actions
def select_action(self, current_state, step):
action = random.choice(self.actions)
return action
Explanation: Here's one way we could implement the decision policy, called a random decision policy:
End of explanation
class QLearningDecisionPolicy(DecisionPolicy):
def __init__(self, actions, input_dim):
self.epsilon = 0.95
self.gamma = 0.3
self.actions = actions
output_dim = len(actions)
h1_dim = 20
self.x = tf.placeholder(tf.float32, [None, input_dim])
self.y = tf.placeholder(tf.float32, [output_dim])
W1 = tf.Variable(tf.random_normal([input_dim, h1_dim]))
b1 = tf.Variable(tf.constant(0.1, shape=[h1_dim]))
h1 = tf.nn.relu(tf.matmul(self.x, W1) + b1)
W2 = tf.Variable(tf.random_normal([h1_dim, output_dim]))
b2 = tf.Variable(tf.constant(0.1, shape=[output_dim]))
self.q = tf.nn.relu(tf.matmul(h1, W2) + b2)
loss = tf.square(self.y - self.q)
self.train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
self.sess = tf.Session()
self.sess.run(tf.global_variables_initializer())
def select_action(self, current_state, step):
threshold = min(self.epsilon, step / 1000.)
if random.random() < threshold:
# Exploit best option with probability epsilon
action_q_vals = self.sess.run(self.q, feed_dict={self.x: current_state})
action_idx = np.argmax(action_q_vals) # TODO: replace w/ tensorflow's argmax
action = self.actions[action_idx]
else:
# Explore random option with probability 1 - epsilon
action = self.actions[random.randint(0, len(self.actions) - 1)]
return action
def update_q(self, state, action, reward, next_state):
action_q_vals = self.sess.run(self.q, feed_dict={self.x: state})
next_action_q_vals = self.sess.run(self.q, feed_dict={self.x: next_state})
next_action_idx = np.argmax(next_action_q_vals)
current_action_idx = self.actions.index(action)
action_q_vals[0, current_action_idx] = reward + self.gamma * next_action_q_vals[0, next_action_idx]
action_q_vals = np.squeeze(np.asarray(action_q_vals))
self.sess.run(self.train_op, feed_dict={self.x: state, self.y: action_q_vals})
Explanation: That's a good baseline. Now let's use a smarter approach using a neural network:
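For reference, the update_q method above implements the usual Q-learning target with discount factor $\gamma$ (self.gamma):
$$Q(s, a) \leftarrow r + \gamma \max_{a'} Q(s', a')$$
while select_action is epsilon-greedy: it exploits the learned Q-values with a probability that ramps up towards self.epsilon and otherwise explores a random action.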
End of explanation
def run_simulation(policy, initial_budget, initial_num_stocks, prices, hist, debug=False):
budget = initial_budget
num_stocks = initial_num_stocks
share_value = 0
transitions = list()
for i in range(len(prices) - hist - 1):
if i % 1000 == 0:
print('progress {:.2f}%'.format(float(100*i) / (len(prices) - hist - 1)))
current_state = np.asmatrix(np.hstack((prices[i:i+hist], budget, num_stocks)))
current_portfolio = budget + num_stocks * share_value
action = policy.select_action(current_state, i)
share_value = float(prices[i + hist])
if action == 'Buy' and budget >= share_value:
budget -= share_value
num_stocks += 1
elif action == 'Sell' and num_stocks > 0:
budget += share_value
num_stocks -= 1
else:
action = 'Hold'
new_portfolio = budget + num_stocks * share_value
reward = new_portfolio - current_portfolio
next_state = np.asmatrix(np.hstack((prices[i+1:i+hist+1], budget, num_stocks)))
transitions.append((current_state, action, reward, next_state))
policy.update_q(current_state, action, reward, next_state)
portfolio = budget + num_stocks * share_value
if debug:
print('${}\t{} shares'.format(budget, num_stocks))
return portfolio
Explanation: Define a function to run a simulation of buying and selling stocks from a market:
End of explanation
def run_simulations(policy, budget, num_stocks, prices, hist):
num_tries = 5
final_portfolios = list()
for i in range(num_tries):
print('Running simulation {}...'.format(i + 1))
final_portfolio = run_simulation(policy, budget, num_stocks, prices, hist)
final_portfolios.append(final_portfolio)
print('Final portfolio: ${}'.format(final_portfolio))
plt.title('Final Portfolio Value')
plt.xlabel('Simulation #')
plt.ylabel('Net worth')
plt.plot(final_portfolios)
plt.show()
Explanation: We want to run simulations multiple times and average out the performances:
End of explanation
def get_prices(share_symbol, start_date, end_date, cache_filename='stock_prices.npy'):
try:
stock_prices = np.load(cache_filename)
except IOError:
share = Share(share_symbol)
stock_hist = share.get_historical(start_date, end_date)
        # convert to an ndarray so the .astype(float) below works on both branches
        stock_prices = np.asarray([stock_price['Open'] for stock_price in stock_hist])
        np.save(cache_filename, stock_prices)
return stock_prices.astype(float)
Explanation: Call the following function to use the Yahoo Finance library and obtain useful stock market data.
End of explanation
def plot_prices(prices):
plt.title('Opening stock prices')
plt.xlabel('day')
plt.ylabel('price ($)')
plt.plot(prices)
plt.savefig('prices.png')
plt.show()
Explanation: Who wants to deal with stock market data without looking at pretty plots? No one. So we need a plotting function:
End of explanation
if __name__ == '__main__':
prices = get_prices('MSFT', '1992-07-22', '2016-07-22')
plot_prices(prices)
actions = ['Buy', 'Sell', 'Hold']
hist = 3
# policy = RandomDecisionPolicy(actions)
policy = QLearningDecisionPolicy(actions, hist + 2)
budget = 100000.0
num_stocks = 0
run_simulations(policy, budget, num_stocks, prices, hist)
Explanation: Train a reinforcement learning policy:
End of explanation |
11,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Genetic Algorithm Workshop
In this workshop we will code up a genetic algorithm for a simple mathematical optimization problem.
Genetic Algorithm is a
* Meta-heuristic
* Inspired by Natural Selection
* Traditionally works on binary data. Can be adapted for other data types as well.
You can find an example illustrating GA below
Step11: The optimization problem
The problem we are considering is a mathematical one
<img src="cone.png" width=500px/>
Decisions
Step12: Great. Now that the class and its basic methods is defined, we move on to code up the GA.
Population
First up is to create an initial population.
Step13: Crossover
We perform a single point crossover between two points
Step14: Mutation
Randomly change a decision such that
Step16: Fitness Evaluation
To evaluate fitness between points we use binary domination. Binary Domination is defined as follows
Step17: Fitness and Elitism
In this workshop we will count the number of points of the population P dominated by a point A as the fitness of point A. This is a very naive measure of fitness since we are using binary domination.
Few prominent alternate methods are
1. Continuous Domination - Section 3.1
2. Non-dominated Sort
3. Non-dominated Sort + Niching
Elitism
Step18: Putting it all together and making the GA
Step19: Visualize
Lets plot the initial population with respect to the final frontier. | Python Code:
%matplotlib inline
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "sbiswas4"
class O:
Basic Class which
- Helps dynamic updates
- Pretty Prints
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] is not "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
Explanation: Genetic Algorithm Workshop
In this workshop we will code up a genetic algorithm for a simple mathematical optimization problem.
Genetic Algorithm is a
* Meta-heuristic
* Inspired by Natural Selection
* Traditionally works on binary data. Can be adapted for other data types as well.
You can find an example illustrating GA below
End of explanation
# Few Utility functions
def say(*lst):
    Print without going to a new line
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
Generate a random number between low and high.
decimals incidicate number of decimal places
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
Shuffle a list
random.shuffle(lst)
return lst
class Decision(O):
Class indicating Decision of a problem
def __init__(self, name, low, high):
@param name: Name of the decision
@param low: minimum value
@param high: maximum value
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
Class indicating Objective of a problem
def __init__(self, name, do_minimize=True):
@param name: Name of the objective
@param do_minimize: Flag indicating if objective has to be minimized or maximized
O.__init__(self, name=name, do_minimize=do_minimize)
class Point(O):
Represents a member of the population
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions)
new.objectives = self.objectives
return new
class Problem(O):
Class representing the cone problem.
def __init__(self):
O.__init__(self)
# TODO 2: Code up decisions and objectives below for the problem
        # using the auxiliary classes provided above.
self.decisions = [Decision('r', 0, 10), Decision('h', 0, 20)]
        # T = total surface area and S = curved surface area
self.objectives = [Objective('S'), Objective('T')]
@staticmethod
def evaluate(point):
[r, h] = point.decisions
# TODO 3: Evaluate the objectives S and T for the point.
base= pi*r*r
slant= sqrt((r**2)+(h**2))
S=pi*r*slant #lateral surface area
T=base+S #total surface area
point.objectives=[S,T]
return point.objectives
@staticmethod
def is_valid(point):
if point is None:
[r, h]= [0.9,15.8]
else:
[r, h] = point.decisions
# TODO 4: Check if the point has valid decisions
Volume=(pi*r*r*h)/3
if Volume>200:
return True
else:
return False
def generate_one(self):
# TODO 5: Generate a valid instance of Point.
[r,h] = self.decisions
p=Point([random_value(r.low,r.high),random_value(h.low,h.high)])
# if Problem.is_valid(p):
while not Problem.is_valid(p):
p.decisions=[random_value(r.low,r.high),random_value(h.low,h.high)]
return p
cone = Problem()
point = cone.generate_one()
cone.evaluate(point)
Explanation: The optimization problem
The problem we are considering is a mathematical one
<img src="cone.png" width=500px/>
Decisions: r in [0, 10] cm; h in [0, 20] cm
Objectives: minimize S, T
Constraints: V > 200cm<sup>3</sup>
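For reference, the quantities computed in Problem.evaluate and Problem.is_valid above are the standard cone formulas:
$$S = \pi r \sqrt{r^2 + h^2}, \qquad T = \pi r^2 + S, \qquad V = \frac{\pi r^2 h}{3}$$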
End of explanation
def populate(problem, size):
population = []
# TODO 6: Create a list of points of length 'size'
for i in range(size):
population.append(problem.generate_one())
return population
# or if ur python OBSESSED
# return [problem.generate_one() for _ in xrange(size)]
pop=populate(cone, 5)
# print(pop)
Explanation: Great. Now that the class and its basic methods is defined, we move on to code up the GA.
Population
First up is to create an initial population.
End of explanation
def crossover(mom, dad):
# TODO 7: Create a new point which contains decisions from
# the first half of mom and second half of dad
half_pt= len(mom.decisions)//2
cross_o=mom.decisions[:half_pt]+dad.decisions[half_pt:]
return Point(cross_o)
pop=populate(cone,5)
crossover(pop[0], pop[1])
Explanation: Crossover
We perform a single point crossover between two points
End of explanation
def mutate(problem, point, mutation_rate=0.01):
# TODO 8: Iterate through all the decisions in the problem
# and if the probability is less than mutation rate
# change the decision(randomly set it between its max and min).
for i in range(len(problem.decisions)):
if random.random()<mutation_rate:
point.decisions[i] = random_value(problem.decisions[i].low, problem.decisions[i].high)
return (point)
mutate(cone,point)
Explanation: Mutation
Randomly change a decision such that
End of explanation
def bdom(problem, one, two):
Return if one dominates two
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
# TODO 9: Return True/False based on the definition
# of bdom above.
    # one dominates two if it is no worse on every (minimized) objective
    # and strictly better on at least one of them
    for i in range(len(objs_one)):
        if objs_one[i] > objs_two[i]:
            return False
        elif objs_one[i] < objs_two[i]:
            dominates = True
    return dominates
bdom(cone, pop[0],pop[1])
Explanation: Fitness Evaluation
To evaluate fitness between points we use binary domination. Binary Domination is defined as follows:
* Consider two points one and two.
* For every objective o and t in one and two, o <= t
* For at least one objective o and t in one and two, o < t
Note: Binary Domination is not the best method to evaluate fitness but due to its simplicity we choose to use it for this workshop.
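A tiny standalone check of that definition on plain tuples of (minimized) objective values -- an illustrative helper only, not part of the workshop code:
def dominates(a, b):
    # a dominates b if it is no worse everywhere and strictly better somewhere
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
print(dominates((3, 5), (4, 5)))   # True
print(dominates((4, 5), (3, 5)))   # False
print(dominates((3, 5), (3, 5)))   # False: equal points do not dominate each other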
End of explanation
def fitness(problem, population, point):
dominates = 0
# TODO 10: Evaluate fitness of a point.
# For this workshop define fitness of a point
# as the number of points dominated by it.
# For example point dominates 5 members of population,
# then fitness of point is 5.
for i in population:
if i !=point and bdom(problem,point,i):
dominates+=1
point.fitness = dominates
return dominates
def elitism(problem, population, retain_size):
# TODO 11: Sort the population with respect to the fitness
# of the points and return the top 'retain_size' points of the population
population = sorted(population, key=lambda x: fitness(problem, population, x), reverse=True)
return population[:retain_size]
print(elitism(cone, pop, 3))
Explanation: Fitness and Elitism
In this workshop we will count the number of points of the population P dominated by a point A as the fitness of point A. This is a very naive measure of fitness since we are using binary domination.
Few prominent alternate methods are
1. Continuous Domination - Section 3.1
2. Non-dominated Sort
3. Non-dominated Sort + Niching
Elitism: Sort points with respect to the fitness and select the top points.
End of explanation
def ga(pop_size = 100, gens = 250):
problem = Problem()
population = populate(problem, pop_size)
# print (population)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size)
gen += 1
print("")
return initial_population, population
# ga()
Explanation: Putting it all together and making the GA
End of explanation
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[0] for i in initial_objs]
initial_y = [i[1] for i in initial_objs]
final_x = [i[0] for i in final_objs]
final_y = [i[1] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
# plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Total Surface Area(T)")
plt.xlabel("Curved Surface Area(S)")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
def plot_pareto2(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[0] for i in initial_objs]
initial_y = [i[1] for i in initial_objs]
final_x = [i[0] for i in final_objs]
final_y = [i[1] for i in final_objs]
# plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Total Surface Area(T)")
plt.xlabel("Curved Surface Area(S)")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga()
plot_pareto(initial, final)
Explanation: Visualize
Lets plot the initial population with respect to the final frontier.
End of explanation |
11,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PmodTMP2 Sensor example
In this example, the Pmod temperature sensor is initialized and set to log a reading every 1 second.
This example requires the PmodTMP2 sensor, and assumes it is attached to PMODB.
1. Simple TMP2 read() to see current room temperature
Step1: 2. Starting logging temperature once every second
Step2: 3. Try to modify temperature reading by touching the sensor
The default interval between samples is 1 second. So wait for at least 10 seconds to get enough samples.
During this period, try to press finger on the sensor to increase its temperature reading.
Stop the logging whenever you are done trying to change the sensor's value.
Step3: 5. Plot values over time | Python Code:
from pynq import Overlay
Overlay("base.bit").download()
from pynq.iop import Pmod_TMP2
from pynq.iop import PMODB
mytmp = Pmod_TMP2(PMODB)
temperature = mytmp.read()
print(str(temperature) + " C")
Explanation: PmodTMP2 Sensor example
In this example, the Pmod temperature sensor is initialized and set to log a reading every 1 second.
This example requires the PmodTMP2 sensor, and assumes it is attached to PMODB.
1. Simple TMP2 read() to see current room temperature
End of explanation
mytmp.start_log()
Explanation: 2. Starting logging temperature once every second
End of explanation
mytmp.stop_log()
log = mytmp.get_log()
Explanation: 3. Try to modify temperature reading by touching the sensor
The default interval between samples is 1 second. So wait for at least 10 seconds to get enough samples.
During this period, try to press finger on the sensor to increase its temperature reading.
Stop the logging whenever you are done trying to change the sensor's value.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(log)), log, 'ro')
plt.title('TMP2 Sensor log')
plt.axis([0, len(log), min(log), max(log)])
plt.show()
Explanation: 5. Plot values over time
End of explanation |
11,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logarithmic Parameters
This notebook explores Bayesian optimisation of a function whose parameter is best thought of logarithmically (the order of magnitude is more important than the value itself)
To accommodate this, the surrogate function is trained on the exponents of the values rather than the values themselves
note
Step1: Function to optimize
Step2: To illustrate the problem
Step3: Now with a Logarithmic latent space mapping | Python Code:
%load_ext autoreload
%autoreload 2
from IPython.core.debugger import Tracer # debugging
from IPython.display import clear_output, display
import time
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
import seaborn as sns; sns.set() # prettify matplotlib
import copy
import numpy as np
import sklearn.gaussian_process as sk_gp
# local modules
import turbo as tb
import turbo.modules as tm
import turbo.gui.jupyter as tg
import turbo.plotting as tp
# make deterministic
np.random.seed(100)
Explanation: Logarithmic Parameters
This notebook explores Bayesian optimisation of a function whose parameter is best thought of logarithmically (the order of magnitude is more important than the value itself)
To accommodate this, the surrogate function is trained on the exponents of the values rather than the values themselves
note: for this particular function, a $\nu=2.5$ works better for the Matern kernel than $\nu=1.5$.
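A minimal standalone illustration of the latent-space idea (for exposition only; this is not the turbo LogMap implementation): work with base-10 exponents so that values an order of magnitude apart are evenly spaced.
import numpy as np
xs = np.array([1e-3, 1e-2, 1e-1, 1e0, 1e1])
latent = np.log10(xs)        # evenly spaced exponents: -3, -2, -1, 0, 1
recovered = 10.0 ** latent   # the mapping is invertible
assert np.allclose(recovered, xs)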
End of explanation
buffer = 5e-3 # function not defined at exactly 0
shift = -2
def f(x):
x = x - shift
return np.cos(2*(20-x)**2)/x - 2*np.log(x)
def logspace(from_, to, num_per_mag=1):
'''
num_per_mag: number of samples per order of magnitude
'''
from_exp = np.log10(from_)
to_exp = np.log10(to)
    num = int(abs(to_exp - from_exp) * num_per_mag) + 1  # np.logspace expects an integer sample count
return np.logspace(from_exp, to_exp, num=num, base=10)
x_min = buffer
x_max = 5
xs = logspace(x_min, x_max, num_per_mag=200)
x_min += shift
x_max += shift
xs += shift
#xs = np.linspace(x_min, x_max, num=601)
print(len(xs))
ys = f(xs)
best_y = np.max(ys)
plt.figure(figsize=(16,4))
plt.plot(xs, ys, 'g-')
plt.margins(0.01, 0.1)
plt.title('Linear Scale')
plt.xlabel('x')
plt.ylabel('cost')
plt.show()
plt.figure(figsize=(16,4))
plt.plot(xs - shift, ys, 'g-') # have to revert the shift to plot with the log scale
plt.margins(0.1, 0.1)
plt.title('Logarithmic Scale')
plt.xlabel('x')
plt.axes().set_xscale('log')
plt.ylabel('cost')
plt.show()
bounds = [('x', x_min, x_max)]
op = tb.Optimiser(f, 'max', bounds, pre_phase_trials=2, settings_preset='default')
'''
op.latent_space = tm.NoLatentSpace()
# this function is very difficult to fit effectively, I found that the only way to make the GP behave is
# to use the domain knowledge that the length_scale can't be anywhere near the default maximum of 100,000
op.surrogate_factory = tm.SciKitGPSurrogate.Factory(gp_params=dict(
alpha = 1e-10, # larger => more noise. Default = 1e-10
kernel = 1.0 * gp.kernels.Matern(nu=2.5, length_scale_bounds=(1e-5, 10))+gp.kernels.WhiteKernel(),
), variable_iterations=lambda trial_num: 4 if (trial_num-2) % 3 == 0 else 1)
'''
op.surrogate = tm.GPySurrogate()
op.acquisition = tm.UCB(beta=2)
op_log = copy.deepcopy(op)
rec = tb.Recorder(op)
Explanation: Function to optimize:
End of explanation
tg.OptimiserProgressBar(op)
op.run(max_trials=30)
tp.plot_error(rec, true_best=best_y);
tp.plot_timings(rec);
tp.interactive_plot_trial_1D(rec, param='x', true_objective=f)
Explanation: To illustrate the problem
End of explanation
zero_point = x_min - buffer # the function is not defined for any x <= zero point
op_log.latent_space = tm.ConstantLatentSpace(mappings={'x' : tm.LogMap(zero_point=zero_point)})
rec_log = tb.Recorder(op_log)
tg.OptimiserProgressBar(op_log)
op_log.run(max_trials=15)
tp.plot_error(rec_log, true_best=best_y);
tp.plot_timings(rec_log);
for l in [False, True]:
tp.plot_trial_1D(rec_log, param='x', trial_num=-1, true_objective=f, plot_in_latent_space=l)
tp.interactive_plot_trial_1D(rec_log, true_objective=f)
Explanation: Now with a Logarithmic latent space mapping
End of explanation |
11,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Fairness Indicators Lineage Case Study
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download and preprocess the dataset
Step3: Building a TFX Pipeline
There are several TFX Pipeline Components that can be used for a production model, but for the purposes of this case study we will focus on using only the components below
Step4: TFX ExampleGen Component
Step5: TFX StatisticsGen Component
Step6: TFX SchemaGen Component
Step9: TFX Transform Component
The Transform component performs data transformations and feature engineering. The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that is the result of model training. Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with.
Define some constants and functions for both the Transform component and the Trainer component. Define them in a Python module, in this case saved to disk using the %%writefile magic command since you are working in a notebook.
The transformations that we will be performing in this case study are as follows
Step17: TFX Trainer Component
The Trainer Component trains a specified TensorFlow model.
In order to run the trainer component we need to create a Python module containing a trainer_fn function that will return an estimator for our model. If you prefer creating a Keras model, you can do so and then convert it to an estimator using keras.model_to_estimator().
The Trainer component trains a specified TensorFlow model. In order to run the model we need to create a Python module containing a function called trainer_fn that TFX will call.
For our case study we will build a Keras model and wrap it with keras.model_to_estimator().
Step19: TensorFlow Model Analysis
Now that our model is developed and trained within TFX, we can use several additional components within the TFX ecosystem to understand our model's performance in a little more detail. By looking at different metrics we're able to get a better picture of how the overall model performs for different slices within our model and make sure it is not underperforming for any subgroup.
First we'll examine TensorFlow Model Analysis, which is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in a notebook.
For a list of possible metrics that can be added into TensorFlow Model Analysis see here.
Step20: Fairness Indicators
Load Fairness Indicators to examine the underlying data.
Step22: Fairness Indicators will allow us to drill down to see the performance of different slices and is designed to support teams in evaluating and improving models for fairness concerns. It enables easy computation of fairness metrics for binary and multiclass classifiers and will allow you to evaluate across any size of use case.
We will load Fairness Indicators into this notebook and take a look at the results. After you have had a moment to explore Fairness Indicators, examine the False Positive Rate and False Negative Rate tabs in the tool. In this case study, we're concerned with trying to reduce the number of false predictions of recidivism, corresponding to the False Positive Rate.
Within the Fairness Indicators tool you'll see two dropdown options
Step23: Identify where the fairness issue could be coming from
For each of the above artifacts, execution, and context types we can use ML Metadata to dig into the attributes and how each part of our ML pipeline was developed.
We'll start by diving into the StatisticsGen to examine the underlying data that we initially fed into the model. By knowing the artifacts within our model we can use ML Metadata and TensorFlow Data Validation to look backward and forward within the model to identify where a potential problem is coming from.
After running the below cell, select Lift (Y=1) in the second chart on the Chart to show tab to see the lift between the different data slices. Within race, the lift for African-American is approximately 1.08 whereas for Caucasian it is approximately 0.86.
Step25: Tracking a Model Change
Now that we have an idea on how we could improve the fairness of our model, we will first document our initial run within the ML Metadata for our own record and for anyone else that might review our changes at a future time.
ML Metadata can keep a log of our past models along with any notes that we would like to add between runs. We'll add a simple note on our first run denoting that this run was done on the full COMPAS dataset
Step33: Improving fairness concerns by weighting the model
There are several ways we can approach fixing fairness concerns within a model. Manipulating observed data/labels, implementing fairness constraints, or prejudice removal by regularization are some techniques<sup>1</sup> that have been used to fix fairness concerns. In this case study we will reweight the model by implementing a custom loss function into Keras.
The code below is the same as the above Transform Component but with the exception of a new class called LogisticEndpoint that we will use for our loss within Keras and a few parameter changes.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, N. (2019). A Survey on Bias and Fairness in Machine Learning. https
Step35: Retrain the TFX model with the weighted model
In this next part we will use the weighted Transform Component to rerun the same Trainer model as before to see the improvement in fairness after the weighting is applied.
Step36: After retraining with the weighted model, we can once again look at the fairness metrics to gauge any improvements in the model. This time, however, we will use the model comparison feature within Fairness Indicators to see the difference between the weighted and unweighted model. Although we're still seeing some fairness concerns with the weighted model, the discrepancy is far less pronounced.
The drawback, however, is that our AUC and binary accuracy have also dropped after weighting the model.
False Positive Rate @ 0.75
African-American | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!python -m pip install -q -U \
tfx \
tensorflow-model-analysis \
tensorflow-data-validation \
tensorflow-metadata \
tensorflow-transform \
ml-metadata \
tfx-bsl
import os
import tempfile
import six.moves.urllib as urllib
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
import pandas as pd
from google.protobuf import text_format
from sklearn.utils import shuffle
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view
import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import evaluator_pb2
from tfx.proto import trainer_pb2
Explanation: Fairness Indicators Lineage Case Study
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Lineage_Case_Study"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/fairness-indicators/tree/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/random-nnlm-en-dim128/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Warning: Estimators are deprecated (not recommended for new code). Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details.
<!--
TODO(b/192933099): update this to use keras instead of estimators.
-->
COMPAS Dataset
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a public dataset, which contains approximately 18,000 criminal cases from Broward County, Florida between January, 2013 and December, 2014. The data contains information about 11,000 unique defendants, including criminal history, demographics, and a risk score intended to represent the defendant’s likelihood of reoffending (recidivism). A machine learning model trained on this data has been used by judges and parole officers to determine whether or not to set bail and whether or not to grant parole.
In 2016, an article published in ProPublica found that the COMPAS model was incorrectly predicting that African-American defendants would recidivate at much higher rates than their white counterparts while Caucasian defendants would not recidivate at a much higher rate. For Caucasian defendants, the model made mistakes in the opposite direction, making incorrect predictions that they wouldn’t commit another crime. The authors went on to show that these biases were likely due to an uneven distribution in the data between African-American and Caucasian defendants. Specifically, the ground truth labels of negative examples (a defendant would not commit another crime) and positive examples (a defendant would commit another crime) were disproportionate between the two races. Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature <sup>1, 2, 3</sup>, with researchers using it to demonstrate techniques for identifying and remediating fairness concerns. This tutorial from the FAT* 2018 conference illustrates how COMPAS can dramatically impact a defendant’s prospects in the real world.
It is important to note that developing a machine learning model to predict pre-trial detention has a number of important ethical considerations. You can learn more about these issues in the Partnership on AI “Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System.” The Partnership on AI is a multi-stakeholder organization -- of which Google is a member -- that creates guidelines around AI.
We’re using the COMPAS dataset only as an example of how to identify and remediate fairness concerns in data. This dataset is canonical in the algorithmic fairness literature.
About the Tools in this Case Study
TensorFlow Extended (TFX) is a Google-production-scale machine learning platform based on TensorFlow. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your machine learning system.
TensorFlow Model Analysis is a library for evaluating machine learning models. Users can evaluate their models on a large amount of data in a distributed manner and view metrics over different slices within a notebook.
Fairness Indicators is a suite of tools built on top of TensorFlow Model Analysis that enables regular evaluation of fairness metrics in product pipelines.
ML Metadata is a library for recording and retrieving the lineage and metadata of ML artifacts such as models, datasets, and metrics. Within TFX ML Metadata will help us understand the artifacts created in a pipeline, which is a unit of data that is passed between TFX components.
TensorFlow Data Validation is a library to analyze your data and check for errors that can affect model training or serving.
Case Study Overview
For the duration of this case study we will define “fairness concerns” as a bias within a model that negatively impacts a slice within our data. Specifically, we’re trying to limit any recidivism prediction that could be biased towards race.
The walk through of the case study will proceed as follows:
Download the data, preprocess, and explore the initial dataset.
Build a TFX pipeline with the COMPAS dataset using a Keras binary classifier.
Run our results through TensorFlow Model Analysis, TensorFlow Data Validation, and load Fairness Indicators to explore any potential fairness concerns within our model.
Use ML Metadata to track all the artifacts for a model that we trained with TFX.
Weight the initial COMPAS dataset for our second model to account for the uneven distribution between recidivism and race.
Review the performance changes within the new dataset.
Check the underlying changes within our TFX pipeline with ML Metadata to understand what changes were made between the two models.
Helpful Resources
This case study is an extension of the below case studies. It is recommended working through the below case studies first.
* TFX Pipeline Overview
* Fairness Indicator Case Study
* TFX Data Validation
Setup
To start, we will install the necessary packages, download the data, and import the required modules for the case study.
To install the required packages for this case study in your notebook run the below PIP command.
Note: See here for a reference on compatibility between different versions of the libraries used in this case study.
Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199.
Chouldechova, A., G’Sell, M., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046.
Berk et al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207.
End of explanation
# Download the COMPAS dataset and setup the required filepaths.
_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')
_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'
_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'compas-scores-two-years.csv')
data = urllib.request.urlopen(_DATA_PATH)
_COMPAS_DF = pd.read_csv(data)
# To simplify the case study, we will only use the columns that will be used for
# our model.
_COLUMN_NAMES = [
'age',
'c_charge_desc',
'c_charge_degree',
'c_days_from_compas',
'is_recid',
'juv_fel_count',
'juv_misd_count',
'juv_other_count',
'priors_count',
'r_days_from_arrest',
'race',
'sex',
'vr_charge_desc',
]
_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]
# We will use 'is_recid' as our ground truth label, which is a boolean value
# indicating if a defendant committed another crime. There are some rows with -1
# indicating that there is no data. We will drop these rows from training.
_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]
# Given the distribution between races in this dataset we will only focus on
# recidivism for African-Americans and Caucasians.
_COMPAS_DF = _COMPAS_DF[
_COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]
# Adding a weight feature that will be used during the second part of this
# case study to help improve fairness concerns.
_COMPAS_DF['sample_weight'] = 0.8
# Load the DataFrame back to a CSV file for our TFX model.
_COMPAS_DF.to_csv(_DATA_FILEPATH, index=False, na_rep='')
Explanation: Download and preprocess the dataset
End of explanation
context = InteractiveContext()
Explanation: Building a TFX Pipeline
There are several TFX Pipeline Components that can be used for a production model, but for the purpose the this case study will focus on using only the below components:
* ExampleGen to read our dataset.
* StatisticsGen to calculate the statistics of our dataset.
* SchemaGen to create a data schema.
* Transform for feature engineering.
* Trainer to run our machine learning model.
Create the InteractiveContext
To run TFX within a notebook, we first will need to create an InteractiveContext to run the components interactively.
InteractiveContext will use a temporary directory with an ephemeral ML Metadata database instance. To use your own pipeline root or database, the optional properties pipeline_root and metadata_connection_config may be passed to InteractiveContext.
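If you do want the run's metadata to outlive the notebook session, both locations can be pinned explicitly. The sketch below is illustrative only (this case study keeps the ephemeral defaults), and the two paths are assumptions:

import os
from tfx.orchestration import metadata

_PIPELINE_ROOT = os.path.join(os.getcwd(), 'pipeline_root')      # assumed location
_METADATA_PATH = os.path.join(_PIPELINE_ROOT, 'metadata.sqlite')  # assumed location
persistent_context = InteractiveContext(
    pipeline_root=_PIPELINE_ROOT,
    metadata_connection_config=metadata.sqlite_metadata_connection_config(
        _METADATA_PATH))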
End of explanation
# The ExampleGen TFX Pipeline component ingests data into TFX pipelines.
# It consumes external files/services to generate Examples which will be read by
# other TFX components. It also provides consistent and configurable partition,
# and shuffles the dataset for ML best practice.
example_gen = CsvExampleGen(input_base=_DATA_ROOT)
context.run(example_gen)
Explanation: TFX ExampleGen Component
End of explanation
# The StatisticsGen TFX pipeline component generates features statistics over
# both training and serving data, which can be used by other pipeline
# components. StatisticsGen uses Beam to scale to large datasets.
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
context.run(statistics_gen)
Explanation: TFX StatisticsGen Component
End of explanation
# Some TFX components use a description of your input data called a schema. The
# schema is an instance of schema.proto. It can specify data types for feature
# values, whether a feature has to be present in all examples, allowed value
# ranges, and other properties. A SchemaGen pipeline component will
# automatically generate a schema by inferring types, categories, and ranges
# from the training data.
infer_schema = SchemaGen(statistics=statistics_gen.outputs['statistics'])
context.run(infer_schema)
Explanation: TFX SchemaGen Component
End of explanation
# Setup paths for the Transform Component.
_transform_module_file = 'compas_transform.py'
%%writefile {_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
CATEGORICAL_FEATURE_KEYS = [
'sex',
'race',
'c_charge_desc',
'c_charge_degree',
]
INT_FEATURE_KEYS = [
'age',
'c_days_from_compas',
'juv_fel_count',
'juv_misd_count',
'juv_other_count',
'priors_count',
'sample_weight',
]
LABEL_KEY = 'is_recid'
# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.
MAX_CATEGORICAL_FEATURE_VALUES = [
2,
6,
513,
14,
]
def transformed_name(key):
return '{}_xf'.format(key)
def preprocessing_fn(inputs):
tf.transform's callback function for preprocessing inputs.
Args:
inputs: Map from feature keys to raw features.
Returns:
Map from string feature key to transformed feature operations.
outputs = {}
for key in CATEGORICAL_FEATURE_KEYS:
outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
vocab_filename=key)
for key in INT_FEATURE_KEYS:
outputs[transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
# Target label will be to see if the defendant is charged for another crime.
outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])
return outputs
def _fill_in_missing(tensor_value):
Replaces missing values in a SparseTensor.
Fills in missing values of `tensor_value` with '' or 0, and converts to a
dense tensor.
Args:
tensor_value: A `SparseTensor` of rank 2. Its dense shape should have size
at most 1 in the second dimension.
Returns:
A rank 1 tensor where missing values of `tensor_value` are filled in.
if not isinstance(tensor_value, tf.sparse.SparseTensor):
return tensor_value
default_value = '' if tensor_value.dtype == tf.string else 0
sparse_tensor = tf.SparseTensor(
tensor_value.indices,
tensor_value.values,
[tensor_value.dense_shape[0], 1])
dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)
return tf.squeeze(dense_tensor, axis=1)
# Build and run the Transform Component.
transform = Transform(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
module_file=_transform_module_file
)
context.run(transform)
Explanation: TFX Transform Component
The Transform component performs data transformations and feature engineering. The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that is the result of model training. Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with.
Define some constants and functions for both the Transform component and the Trainer component. Define them in a Python module, in this case saved to disk using the %%writefile magic command since you are working in a notebook.
The transformation that we will be performing in this case study are as follows:
* For string values we will generate a vocabulary that maps to an integer via tft.compute_and_apply_vocabulary.
* For integer values we will standardize the column mean 0 and variance 1 via tft.scale_to_z_score.
* Remove empty row values and replace them with an empty string or 0 depending on the feature type.
* Append ‘_xf’ to column names to denote the features that were processed in the Transform Component.
Now let's define a module containing the preprocessing_fn() function that we will pass to the Transform component:
End of explanation
# Setup paths for the Trainer Component.
_trainer_module_file = 'compas_trainer.py'
%%writefile {_trainer_module_file}
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from compas_transform import *
_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999
def transformed_names(keys):
return [transformed_name(key) for key in keys]
def transformed_name(key):
return '{}_xf'.format(key)
def _gzip_reader_fn(filenames):
Returns a record reader that can read gzip'ed files.
Args:
filenames: A tf.string tensor or tf.data.Dataset containing one or more
filenames.
Returns: A nested structure of tf.TypeSpec objects matching the structure of
an element of this dataset and specifying the type of individual components.
return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
Generates a feature spec from a Schema proto.
Args:
schema: A Schema proto.
Returns:
A feature spec defined as a dict whose keys are feature names and values are
instances of FixedLenFeature, VarLenFeature or SparseFeature.
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _example_serving_receiver_fn(tf_transform_output, schema):
Builds the serving inputs.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
TensorFlow graph which parses examples, applying tf-transform to them.
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_output.transform_raw_features(
serving_input_receiver.features)
transformed_features.pop(transformed_name(LABEL_KEY))
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
Builds everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- TensorFlow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(
serialized=serialized_tf_example, features=raw_feature_spec)
transformed_features = tf_transform_output.transform_raw_features(features)
labels = transformed_features.pop(transformed_name(LABEL_KEY))
receiver_tensors = {'examples': serialized_tf_example}
return tfma.export.EvalInputReceiver(
features=transformed_features,
receiver_tensors=receiver_tensors,
labels=labels)
def _input_fn(filenames, tf_transform_output, batch_size=200):
Generates features and labels for training or evaluation.
Args:
filenames: List of CSV files to read data from.
tf_transform_output: A TFTransformOutput.
batch_size: First dimension size of the Tensors returned by input_fn.
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
filenames,
batch_size,
transformed_feature_spec,
shuffle=False,
reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
def _keras_model_builder():
Build a keras model for COMPAS dataset classification.
Returns:
A compiled Keras model.
feature_columns = []
feature_layer_inputs = {}
for key in transformed_names(INT_FEATURE_KEYS):
feature_columns.append(tf.feature_column.numeric_column(key))
feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)
for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES):
feature_columns.append(
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
key, num_buckets=num_buckets)))
feature_layer_inputs[key] = tf.keras.Input(
shape=(1,), name=key, dtype=tf.dtypes.int32)
feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
feature_layer_outputs = feature_columns_input(feature_layer_inputs)
dense_layers = tf.keras.layers.Dense(
20, activation='relu', name='dense_1')(feature_layer_outputs)
dense_layers = tf.keras.layers.Dense(
10, activation='relu', name='dense_2')(dense_layers)
output = tf.keras.layers.Dense(
1, name='predictions')(dense_layers)
model = tf.keras.Model(
inputs=[v for v in feature_layer_inputs.values()], outputs=output)
model.compile(
loss=tf.keras.losses.MeanAbsoluteError(),
optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))
return model
# TFX will call this function.
def trainer_fn(hparams, schema):
Build the estimator using the high level API.
Args:
hparams: Hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_output, schema)
exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='compas-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
keep_checkpoint_max=_MAX_CHECKPOINTS)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = tf.keras.estimator.model_to_estimator(
keras_model=_keras_model_builder(), config=run_config)
# Create an input receiver for TFMA processing.
receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
# Uses user-provided Python function that implements a model using TensorFlow's
# Estimators API.
trainer = Trainer(
module_file=_trainer_module_file,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer)
Explanation: TFX Trainer Component
The Trainer component trains a specified TensorFlow model. In order to run it, we need to create a Python module containing a trainer_fn function that TFX will call and that returns an estimator for our model.
If you prefer creating a Keras model, you can do so and then convert it to an estimator using keras.model_to_estimator(). For our case study we will build a Keras model and convert it with keras.model_to_estimator().
End of explanation
# Uses TensorFlow Model Analysis to compute a evaluation statistics over
# features of a model.
model_analyzer = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config = text_format.Parse(
model_specs {
label_key: 'is_recid'
}
metrics_specs {
metrics {class_name: "BinaryAccuracy"}
metrics {class_name: "AUC"}
metrics {
class_name: "FairnessIndicators"
config: '{"thresholds": [0.25, 0.5, 0.75]}'
}
}
slicing_specs {
feature_keys: 'race'
}
, tfma.EvalConfig())
)
context.run(model_analyzer)
Explanation: TensorFlow Model Analysis
Now that our model is trained developed and trained within TFX, we can use several additional components within the TFX exosystem to understand our models performance in a little more detail. By looking at different metrics we’re able to get a better picture of how the overall model performs for different slices within our model to make sure our model is not underperforming for any subgroup.
First we'll examine TensorFlow Model Analysis, which is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in a notebook.
For a list of possible metrics that can be added into TensorFlow Model Analysis see here.
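As an optional aside (not a step the rest of the case study depends on), the same evaluation output can also be inspected with TFMA's generic per-slice view before moving on to Fairness Indicators; a short sketch using the Evaluator output above:

aside_eval_result = tfma.load_eval_result(
    model_analyzer.outputs['evaluation'].get()[0].uri)
tfma.view.render_slicing_metrics(aside_eval_result, slicing_column='race')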
End of explanation
evaluation_uri = model_analyzer.outputs['evaluation'].get()[0].uri
eval_result = tfma.load_eval_result(evaluation_uri)
tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)
Explanation: Fairness Indicators
Load Fairness Indicators to examine the underlying data.
End of explanation
# Connect to the TFX database.
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = os.path.join(
context.pipeline_root, 'metadata.sqlite')
store = metadata_store.MetadataStore(connection_config)
def _mlmd_type_to_dataframe(mlmd_type):
Helper function to turn MLMD into a Pandas DataFrame.
Args:
mlmd_type: Metadata store type.
Returns:
DataFrame containing type ID, Name, and Properties.
pd.set_option('display.max_columns', None)
pd.set_option('display.expand_frame_repr', False)
column_names = ['ID', 'Name', 'Properties']
df = pd.DataFrame(columns=column_names)
for a_type in mlmd_type:
mlmd_row = pd.DataFrame([[a_type.id, a_type.name, a_type.properties]],
columns=column_names)
df = df.append(mlmd_row)
return df
# ML Metadata stores strong-typed Artifacts, Executions, and Contexts.
# First, we can use type APIs to understand what is defined in ML Metadata
# by the current version of TFX. We'll be able to view all the previous runs
# that created our initial model.
print('Artifact Types:')
display(_mlmd_type_to_dataframe(store.get_artifact_types()))
print('\nExecution Types:')
display(_mlmd_type_to_dataframe(store.get_execution_types()))
print('\nContext Types:')
display(_mlmd_type_to_dataframe(store.get_context_types()))
Explanation: Fairness Indicators will allow us to drill down to see the performance of different slices and is designed to support teams in evaluating and improving models for fairness concerns. It enables easy computation of fairness metrics for binary and multiclass classifiers and will allow you to evaluate use cases of any size.
We will load Fairness Indicators into this notebook and analyze the results. After you have had a moment to explore Fairness Indicators, examine the False Positive Rate and False Negative Rate tabs in the tool. In this case study, we're concerned with trying to reduce the number of false predictions of recidivism, corresponding to the False Positive Rate.
Within the Fairness Indicators tool you'll see two dropdown options:
1. A "Baseline" option that is set by column_for_slicing.
2. A "Thresholds" option that is set by fairness_indicator_thresholds.
“Baseline” is the slice you want to compare all other slices to. Most commonly, it is represented by the overall slice, but can also be one of the specific slices as well.
"Threshold" is a value set within a given binary classification model to indicate where a prediction should be placed. When setting a threshold there are two things you should keep in mind.
Precision: What is the downside if your prediction results in a Type 1 error? In this case study a higher threshold would mean we're predicting more defendants will commit another crime when they actually don't.
Recall: What is the downside of a Type II error? In this case study a higher threshold would mean we're predicting more defendants will not commit another crime when they actually do.
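To make the threshold trade-off concrete, here is a tiny self-contained sketch with toy labels and scores (unrelated to the COMPAS data) showing how the false positive rate moves as the threshold changes:

import numpy as np

toy_labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
toy_scores = np.array([0.20, 0.40, 0.80, 0.90, 0.30, 0.60, 0.85, 0.95])

def toy_false_positive_rate(labels, scores, threshold):
    # FPR = false positives / (false positives + true negatives).
    predictions = (scores >= threshold).astype(int)
    false_positives = np.sum((predictions == 1) & (labels == 0))
    true_negatives = np.sum((predictions == 0) & (labels == 0))
    return false_positives / (false_positives + true_negatives)

for threshold in (0.25, 0.5, 0.75):
    print(threshold, toy_false_positive_rate(toy_labels, toy_scores, threshold))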
We will set arbitrary thresholds at 0.75 and we will only focus on the fairness metrics for African-American and Caucasian defendants given the small sample sizes for the other races, which aren’t large enough to draw statistically significant conclusions.
The rates below might differ slightly based on how the data was shuffled at the beginning of this case study, but take a look at the difference in the data between African-American and Caucasian defendants. At a lower threshold our model is more likely to predict that a Caucasian defendant will commit a second crime compared to an African-American defendant. However this prediction inverts as we increase our threshold.
False Positive Rate @ 0.75
African-American: ~30%
AUC: 0.71
Binary Accuracy: 0.67
Caucasian: ~8%
AUC: 0.71
Binary Accuracy: 0.67
More information on Type I/II errors and threshold setting can be found here.
ML Metadata
To understand where disparity could be coming from and to take a snapshot of our current model, we can use ML Metadata for recording and retrieving metadata associated with our model. ML Metadata is an integral part of TFX, but is designed so that it can be used independently.
For this case study, we will list all artifacts that we developed previously within this case study. By cycling through the artifacts, executions, and contexts we will have a high-level view of our TFX model to dig into where any potential issues are coming from. This will provide us a baseline overview of how our model was developed and what TFX components helped to develop our initial model.
We will start by first laying out the high level artifacts, execution, and context types in our model.
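Once the type tables are printed, the same store handle can be used to pull the concrete artifact instances of any single type. For example ('Examples' is the standard TFX artifact type produced by ExampleGen):

for examples_artifact in store.get_artifacts_by_type('Examples'):
    print(examples_artifact.id, examples_artifact.uri)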
End of explanation
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
stats_options=tfdv.StatsOptions(label_feature='is_recid'))
exec_result = context.run(statistics_gen)
for event in store.get_events_by_execution_ids([exec_result.execution_id]):
if event.path.steps[0].key == 'statistics':
statistics_w_schema_uri = store.get_artifacts_by_id([event.artifact_id])[0].uri
model_stats = tfdv.load_statistics(
os.path.join(statistics_w_schema_uri, 'eval/stats_tfrecord/'))
tfdv.visualize_statistics(model_stats)
Explanation: Identify where the fairness issue could be coming from
For each of the above artifacts, execution, and context types we can use ML Metadata to dig into the attributes and how each part of our ML pipeline was developed.
We'll start by diving into the StatisticsGen to examine the underlying data that we initially fed into the model. By knowing the artifacts within our model we can use ML Metadata and TensorFlow Data Validation to look backward and forward within the model to identify where a potential problem is coming from.
After running the below cell, select Lift (Y=1) in the second chart on the Chart to show tab to see the lift between the different data slices. Within race, the lift for African-American is approximately 1.08 whereas Caucasian is approximately 0.86.
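For reference, the lift shown by TFDV here is P(is_recid = 1 | race) / P(is_recid = 1), so a value above 1 means the slice has a higher-than-average recidivism label rate. A rough pandas cross-check on the DataFrame built at the top of the notebook (only approximate, since TFDV computes these statistics on the eval split):

overall_recid_rate = _COMPAS_DF['is_recid'].mean()
lift_by_race = _COMPAS_DF.groupby('race')['is_recid'].mean() / overall_recid_rate
print(lift_by_race)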
End of explanation
_MODEL_NOTE_TO_ADD = 'First model that contains fairness concerns in the model.'
first_trained_model = store.get_artifacts_by_type('Model')[-1]
# Add the two notes above to the ML metadata.
first_trained_model.custom_properties['note'].string_value = _MODEL_NOTE_TO_ADD
store.put_artifacts([first_trained_model])
def _mlmd_model_to_dataframe(model, model_number):
Helper function to turn an MLMD model into a Pandas DataFrame.
Args:
model: Metadata store model.
model_number: Number of model run within ML Metadata.
Returns:
DataFrame containing the ML Metadata model.
pd.set_option('display.max_columns', None)
pd.set_option('display.expand_frame_repr', False)
df = pd.DataFrame()
custom_properties = ['name', 'note', 'state', 'producer_component',
'pipeline_name']
df['id'] = [model[model_number].id]
df['uri'] = [model[model_number].uri]
for prop in custom_properties:
df[prop] = model[model_number].custom_properties.get(prop)
df[prop] = df[prop].astype(str).map(
lambda x: x.lstrip('string_value: "').rstrip('"\n'))
return df
# Print the current model to see the results of the ML Metadata for the model.
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
Explanation: Tracking a Model Change
Now that we have an idea on how we could improve the fairness of our model, we will first document our initial run within the ML Metadata for our own record and for anyone else that might review our changes at a future time.
ML Metadata can keep a log of our past models along with any notes that we would like to add between runs. We'll add a simple note on our first run denoting that this run was done on the full COMPAS dataset.
End of explanation
%%writefile {_trainer_module_file}
import numpy as np
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from compas_transform import *
_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999
def transformed_names(keys):
return [transformed_name(key) for key in keys]
def transformed_name(key):
return '{}_xf'.format(key)
def _gzip_reader_fn(filenames):
Returns a record reader that can read gzip'ed files.
Args:
filenames: A tf.string tensor or tf.data.Dataset containing one or more
filenames.
Returns: A nested structure of tf.TypeSpec objects matching the structure of
an element of this dataset and specifying the type of individual components.
return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
Generates a feature spec from a Schema proto.
Args:
schema: A Schema proto.
Returns:
A feature spec defined as a dict whose keys are feature names and values are
instances of FixedLenFeature, VarLenFeature or SparseFeature.
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _example_serving_receiver_fn(tf_transform_output, schema):
Builds the serving inputs.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
TensorFlow graph which parses examples, applying tf-transform to them.
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_output.transform_raw_features(
serving_input_receiver.features)
transformed_features.pop(transformed_name(LABEL_KEY))
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
Builds everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- TensorFlow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(
serialized=serialized_tf_example, features=raw_feature_spec)
transformed_features = tf_transform_output.transform_raw_features(features)
labels = transformed_features.pop(transformed_name(LABEL_KEY))
receiver_tensors = {'examples': serialized_tf_example}
return tfma.export.EvalInputReceiver(
features=transformed_features,
receiver_tensors=receiver_tensors,
labels=labels)
def _input_fn(filenames, tf_transform_output, batch_size=200):
Generates features and labels for training or evaluation.
Args:
filenames: List of CSV files to read data from.
tf_transform_output: A TFTransformOutput.
batch_size: First dimension size of the Tensors returned by input_fn.
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
filenames,
batch_size,
transformed_feature_spec,
shuffle=False,
reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
# TFX will call this function.
def trainer_fn(hparams, schema):
Build the estimator using the high level API.
Args:
hparams: Hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_output, schema)
exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='compas-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
keep_checkpoint_max=_MAX_CHECKPOINTS)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = tf.keras.estimator.model_to_estimator(
keras_model=_keras_model_builder(), config=run_config)
# Create an input receiver for TFMA processing.
receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
def _keras_model_builder():
Build a keras model for COMPAS dataset classification.
Returns:
A compiled Keras model.
feature_columns = []
feature_layer_inputs = {}
for key in transformed_names(INT_FEATURE_KEYS):
feature_columns.append(tf.feature_column.numeric_column(key))
feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)
for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES):
feature_columns.append(
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
key, num_buckets=num_buckets)))
feature_layer_inputs[key] = tf.keras.Input(
shape=(1,), name=key, dtype=tf.dtypes.int32)
feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
feature_layer_outputs = feature_columns_input(feature_layer_inputs)
dense_layers = tf.keras.layers.Dense(
20, activation='relu', name='dense_1')(feature_layer_outputs)
dense_layers = tf.keras.layers.Dense(
10, activation='relu', name='dense_2')(dense_layers)
output = tf.keras.layers.Dense(
1, name='predictions')(dense_layers)
model = tf.keras.Model(
inputs=[v for v in feature_layer_inputs.values()], outputs=output)
# To weight our model we will develop a custom loss class within Keras.
# The old loss is commented out below and the new one is added in below.
model.compile(
# loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
loss=LogisticEndpoint(),
optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))
return model
class LogisticEndpoint(tf.keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def __call__(self, y_true, y_pred, sample_weight=None):
inputs = [y_true, y_pred]
inputs += sample_weight or ['sample_weight_xf']
return super(LogisticEndpoint, self).__call__(inputs)
def call(self, inputs):
y_true, y_pred = inputs[0], inputs[1]
if len(inputs) == 3:
sample_weight = inputs[2]
else:
sample_weight = None
loss = self.loss_fn(y_true, y_pred, sample_weight)
self.add_loss(loss)
reduce_loss = tf.math.divide_no_nan(
tf.math.reduce_sum(tf.nn.softmax(y_pred)), _BATCH_SIZE)
return reduce_loss
Explanation: Improving fairness concerns by weighting the model
There are several ways we can approach fixing fairness concerns within a model. Manipulating observed data/labels, implementing fairness constraints, or prejudice removal by regularization are some techniques<sup>1</sup> that have been used to fix fairness concerns. In this case study we will reweight the model by implementing a custom loss function into Keras.
The code below is the same as the earlier Trainer component but with the exception of a new class called LogisticEndpoint that we will use for our loss within Keras and a few parameter changes.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, N. (2019). A Survey on Bias and Fairness in Machine Learning. https://arxiv.org/pdf/1908.09635.pdf
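As an aside, another common reweighting scheme (not the one used in this case study, which keeps a constant sample_weight column and relies on the custom loss in the trainer module) is to derive per-example weights from the joint (race, label) distribution so that under-represented cells carry more weight. A sketch over the DataFrame built earlier:

group_size = _COMPAS_DF.groupby(['race', 'is_recid'])['is_recid'].transform('count')
num_groups = _COMPAS_DF.groupby(['race', 'is_recid']).ngroups
balanced_weights = len(_COMPAS_DF) / (num_groups * group_size)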
End of explanation
trainer_weighted = Trainer(
module_file=_trainer_module_file,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer_weighted)
# Again, we will run TensorFlow Model Analysis and load Fairness Indicators
# to examine the performance change in our weighted model.
model_analyzer_weighted = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer_weighted.outputs['model'],
eval_config = text_format.Parse(
model_specs {
label_key: 'is_recid'
}
metrics_specs {
metrics {class_name: 'BinaryAccuracy'}
metrics {class_name: 'AUC'}
metrics {
class_name: 'FairnessIndicators'
config: '{"thresholds": [0.25, 0.5, 0.75]}'
}
}
slicing_specs {
feature_keys: 'race'
}
, tfma.EvalConfig())
)
context.run(model_analyzer_weighted)
evaluation_uri_weighted = model_analyzer_weighted.outputs['evaluation'].get()[0].uri
eval_result_weighted = tfma.load_eval_result(evaluation_uri_weighted)
multi_eval_results = {
'Unweighted Model': eval_result,
'Weighted Model': eval_result_weighted
}
tfma.addons.fairness.view.widget_view.render_fairness_indicator(
multi_eval_results=multi_eval_results)
Explanation: Retrain the TFX model with the weighted model
In this next part we will use the weighted Transform Component to rerun the same Trainer model as before to see the improvement in fairness after the weighting is applied.
End of explanation
# Pull the URI for the two models that we ran in this case study.
first_model_uri = store.get_artifacts_by_type('ExampleStatistics')[-1].uri
second_model_uri = store.get_artifacts_by_type('ExampleStatistics')[0].uri
# Load the stats for both models.
first_model_stats = tfdv.load_statistics(os.path.join(
first_model_uri, 'eval/stats_tfrecord/'))
second_model_stats = tfdv.load_statistics(os.path.join(
second_model_uri, 'eval/stats_tfrecord/'))
# Visualize the statistics between the two models.
tfdv.visualize_statistics(
lhs_statistics=second_model_stats,
lhs_name='Sampled Model',
rhs_statistics=first_model_stats,
rhs_name='COMPAS Original')
# Add a new note within ML Metadata describing the weighted model.
_NOTE_TO_ADD = 'Weighted model between race and is_recid.'
# Pulling the URI for the weighted trained model.
second_trained_model = store.get_artifacts_by_type('Model')[-1]
# Add the note to ML Metadata.
second_trained_model.custom_properties['note'].string_value = _NOTE_TO_ADD
store.put_artifacts([second_trained_model])
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), -1))
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
Explanation: After retraining our results with the weighted model, we can once again look at the fairness metrics to gauge any improvements in the model. This time, however, we will use the model comparison feature within Fairness Indicators to see the difference between the weighted and unweighted model. Although we’re still seeing some fairness concerns with the weighted model, the discrepancy is far less pronounced.
The drawback, however, is that our AUC and binary accuracy have also dropped after weighting the model.
False Positive Rate @ 0.75
African-American: ~1%
AUC: 0.47
Binary Accuracy: 0.59
Caucasian: ~0%
AUC: 0.47
Binary Accuracy: 0.58
Examine the data of the second run
Finally, we can visualize the data with TensorFlow Data Validation and overlay the data changes between the two models and add an additional note to the ML Metadata indicating that this model has improved the fairness concerns.
End of explanation |
11,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook the best models and input parameters will be searched for. The problem at hand is predicting the price of any stock symbol 7 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined, using the MAE metrics (and/or the R^2 metrics). The step for the rolling validation will be determined taking into consideration a compromise between having enough points and the time needed to compute the validation.
Step1: Let's get the data.
Step2: Let's evaluate the Dummy predictor (mean of the base period) with the first parameters set
Step3: So, a simple mean predictor gives a Mean Relative Error of about 5% over the whole period and all the symbols, but the R^2 score is about 0.3. That is for predicting 7 days ahead, with a base interval of 14 days, in a rolling validation.
Let's see if a linear regression can improve that.
Step4: Let's try to find the best base period and training period for the Linear Regressor.
Step5: Let's create a parameters list for roll evaluation. We should keep only the param sets with ahead_days=7, and also add different training periods.
Step6: Let's define a function that returns the mean r^2 and mre for a roll evaluation
Step7: Let's test the single params set function
Step8: Now with two params sets
Step9: Finally, let's parallelize the evaluation. (some code and suggestions were taken from here
Step10: So, the best parameters to predict 7 days ahead seem to be 7 base days and 504 training days. (the step days between samples is fixed at 7, so base days larger than 7 will have more correlated samples. That may, or may not, be part of the cause. It won't be studied further on this project.)
The MRE is about 4.1% (that is better than the 5% target)
Let's use other models...
- KNN model
Step11: - Decision Tree model
Step12: - Random Forest model | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
Explanation: In this notebook the best models and input parameters will be searched for. The problem at hand is predicting the price of any stock symbol 7 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined, using the MAE metrics (and/or the R^2 metrics). The step for the rolling validation will be determined taking into consideration a compromise between having enough points and the time needed to compute the validation.
End of explanation
datasets_params_list_df = pd.read_pickle('../../data/datasets_params_list_df.pkl')
datasets_params_list_df.head()
def unpack_params(params_df):
GOOD_DATA_RATIO = params_df['GOOD_DATA_RATIO']
train_val_time = int(params_df['train_val_time'])
base_days = int(params_df['base_days'])
step_days = int(params_df['step_days'])
ahead_days = int(params_df['ahead_days'])
SAMPLES_GOOD_DATA_RATIO = params_df['SAMPLES_GOOD_DATA_RATIO']
x_filename = params_df['x_filename']
y_filename = params_df['y_filename']
return GOOD_DATA_RATIO, train_val_time, base_days, step_days, ahead_days, SAMPLES_GOOD_DATA_RATIO, x_filename, y_filename
# Input values
GOOD_DATA_RATIO, \
train_val_time, \
base_days, \
step_days, \
ahead_days, \
SAMPLES_GOOD_DATA_RATIO, \
x_filename, \
y_filename = unpack_params(datasets_params_list_df.iloc[1,:])
# Getting the data
x = pd.read_pickle('../../data/{}'.format(x_filename))
y = pd.read_pickle('../../data/{}'.format(y_filename))
print(x.shape)
x.head()
print(y.shape)
y.head()
Explanation: Let's get the data.
End of explanation
tic = time()
train_days = 252 # The amount of market days for each training period
step_eval_days = 30 # The step to move between training/validation pairs
from predictor.dummy_mean_predictor import DummyPredictor
dummy_predictor = DummyPredictor()
import predictor.evaluation as ev
r2, mre, y_val_true_df, y_val_pred_df, mean_dates = ev.roll_evaluate(x,
y,
train_days,
step_eval_days,
ahead_days,
dummy_predictor,
verbose=True)
print('r2 shape: {}'.format(r2.shape))
print('mre shape: {}'.format(mre.shape))
print('y_val_true_df shape: {}'.format(y_val_true_df.shape))
print('y_val_pred_df shape: {}'.format(y_val_pred_df.shape))
print('mean_dates shape {}'.format(mean_dates.shape))
plt.plot(mean_dates, r2[:, 0], 'b', label='Mean r2 score')
plt.plot(mean_dates, r2[:, 0] + 2*r2[:, 1], 'r')
plt.plot(mean_dates, r2[:, 0] - 2*r2[:, 1], 'r')
plt.xlabel('Mean date of the training period')
plt.legend()
plt.title('Training metrics')
plt.grid()
plt.figure()
plt.plot(mean_dates, mre[:, 0], 'b', label='Mean MRE')
plt.plot(mean_dates, mre[:, 0] + 2*mre[:, 1], 'r')
plt.plot(mean_dates, mre[:, 0] - 2*mre[:, 1], 'r')
plt.xlabel('Mean date of the training period')
plt.legend()
plt.title('Training metrics')
plt.grid()
print('\n')
val_metrics_df = ev.get_metrics_df(y_val_true_df, y_val_pred_df)
print(val_metrics_df.head())
print('\n\n' + '-'*70)
print('The mean metrics are: \n{}'.format(val_metrics_df.mean()))
print('-'*70)
toc = time()
print('Elapsed time: {} seconds'.format((toc-tic)))
Explanation: Let's evaluate the Dummy predictor (mean of the base period) with the first parameters set
End of explanation
tic = time()
from predictor.linear_predictor import LinearPredictor
eval_predictor = LinearPredictor()
train_days = 252 # The amount of market days for each training period
step_eval_days = 30 # The step to move between training/validation pairs
import predictor.evaluation as ev
r2, mre, y_val_true_df, y_val_pred_df, mean_dates = ev.roll_evaluate(x,
y,
train_days,
step_eval_days,
ahead_days,
eval_predictor,
verbose=True)
print('r2 shape: {}'.format(r2.shape))
print('mre shape: {}'.format(mre.shape))
print('y_val_true_df shape: {}'.format(y_val_true_df.shape))
print('y_val_pred_df shape: {}'.format(y_val_pred_df.shape))
print('mean_dates shape {}'.format(mean_dates.shape))
plt.plot(mean_dates, r2[:, 0], 'b', label='Mean r2 score')
plt.plot(mean_dates, r2[:, 0] + 2*r2[:, 1], 'r')
plt.plot(mean_dates, r2[:, 0] - 2*r2[:, 1], 'r')
plt.xlabel('Mean date of the training period')
plt.legend()
plt.title('Training metrics')
plt.grid()
plt.figure()
plt.plot(mean_dates, mre[:, 0], 'b', label='Mean MRE')
plt.plot(mean_dates, mre[:, 0] + 2*mre[:, 1], 'r')
plt.plot(mean_dates, mre[:, 0] - 2*mre[:, 1], 'r')
plt.xlabel('Mean date of the training period')
plt.legend()
plt.title('Training metrics')
plt.grid()
print('\n')
val_metrics_df = ev.get_metrics_df(y_val_true_df, y_val_pred_df)
print(val_metrics_df.head())
print('\n\n' + '-'*70)
print('The mean metrics are: \n{}'.format(val_metrics_df.mean()))
print('-'*70)
toc = time()
print('Elapsed time: {} seconds'.format((toc-tic)))
Explanation: So, a simple mean predictor gives a Mean Relative Error of about 5% over the whole period and all the symbols, but the R^2 score is about 0.3. That is for predicting 7 days ahead, with a base interval of 14 days, in a rolling validation.
Let's see if a linear regression can improve that.
End of explanation
datasets_params_list_df.head()
lin_reg_results = datasets_params_list_df.copy() # To store the results of the r2 score and mre.
train_days_arr = 252 * np.array([1, 2, 3])
train_days_arr
Explanation: Let's try to find the best base period and training period for the Linear Regressor.
End of explanation
params_list_7_df = datasets_params_list_df[datasets_params_list_df['ahead_days'] == 7]
params_list_7_df
params_list_df = pd.DataFrame()
for train_days in train_days_arr:
temp_df = params_list_7_df.copy()
temp_df['train_days'] = train_days
params_list_df = params_list_df.append(temp_df, ignore_index=True)
params_list_df
Explanation: Let's create a parameters list for roll evaluation. We should keep only the param sets with ahead_days=7, and also add different training periods.
End of explanation
import predictor.evaluation as ev
def mean_score_eval(params):
# Input values
train_days = int(params['train_days'])
GOOD_DATA_RATIO, \
train_val_time, \
base_days, \
step_days, \
ahead_days, \
SAMPLES_GOOD_DATA_RATIO, \
x_filename, \
y_filename = unpack_params(params)
pid = 'base{}_ahead{}_train{}'.format(base_days, ahead_days, train_days)
print('Generating: {}'.format(pid))
# Getting the data
x = pd.read_pickle('../../data/{}'.format(x_filename))
y = pd.read_pickle('../../data/{}'.format(y_filename))
r2, mre, y_val_true_df, y_val_pred_df, mean_dates = ev.roll_evaluate(x,
y,
train_days,
step_eval_days,
ahead_days,
eval_predictor,
verbose=True)
val_metrics_df = ev.get_metrics_df(y_val_true_df, y_val_pred_df)
#return (pid, pid)
result = tuple(val_metrics_df.mean().values)
print(result)
return result
def apply_mean_score_eval(params_df):
result_df = params_df.copy()
result_df['scores'] = result_df.apply(mean_score_eval, axis=1)
return result_df
Explanation: Let's define a function that returns the mean r^2 and mre for a roll evaluation
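For reference, a sketch of the two summary metrics used throughout this notebook; the project's ev.get_metrics_df is assumed to compute something equivalent per ticker before the means are taken:

def r2_and_mre(y_true, y_pred):
    # R^2 plus mean relative error |pred - true| / |true|.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mre = np.mean(np.abs(y_pred - y_true) / np.abs(y_true))
    return r2_score(y_true, y_pred), mre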
End of explanation
# Global variables
eval_predictor = LinearPredictor()
step_eval_days = 30 # The step to move between training/validation pairs
test_params_df = params_list_df.iloc[0,:]
test_params_df
scores_tuple = mean_score_eval(test_params_df)
scores_tuple
Explanation: Let's test the single params set function
End of explanation
test_params_df = params_list_df.iloc[0:2,:]
test_params_df
res_df = apply_mean_score_eval(test_params_df)
res_df
Explanation: Now with two params sets
End of explanation
from multiprocessing import Pool
num_partitions = 4 #number of partitions to split dataframe
num_cores = 4 #number of cores on your machine
def parallelize_dataframe(df, func):
df_split = np.array_split(df, num_partitions)
pool = Pool(num_cores)
df = pd.concat(pool.map(func, df_split))
pool.close()
pool.join()
return df
from predictor.linear_predictor import LinearPredictor
# Global variables
eval_predictor = LinearPredictor()
step_eval_days = 30 # The step to move between training/validation pairs
results_df = parallelize_dataframe(params_list_df, apply_mean_score_eval)
results_df
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
results_df
# Pickle that!
results_df.to_pickle('../../data/results_ahead7_linear_df.pkl')
results_df['mre'].plot()
np.argmin(results_df['mre'])
results_df.iloc[np.argmin(results_df['mre'])]
results_df.iloc[np.argmax(results_df['r2'])]
Explanation: Finally, let's parallelize the evaluation. (some code and suggestions were taken from here: http://www.racketracer.com/2016/07/06/pandas-in-parallel/#comments)
End of explanation
from predictor.knn_predictor import KNNPredictor
# Global variables
eval_predictor = KNNPredictor()
step_eval_days = 30 # The step to move between training/validation pairs
results_df = parallelize_dataframe(params_list_df, apply_mean_score_eval)
results_df
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead7_knn_df.pkl')
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
Explanation: So, the best parameters to predict 7 days ahead seem to be 7 base days and 504 training days. (the step days between samples is fixed at 7, so base days larger than 7 will have more correlated samples. That may, or may not, be part of the cause. It won't be studied further on this project.)
The MRE is about 4.1% (that is better than the 5% target)
Let's use other models...
- KNN model
End of explanation
from predictor.decision_tree_predictor import DecisionTreePredictor
# Global variables
eval_predictor = DecisionTreePredictor()
step_eval_days = 60 # The step to move between training/validation pairs
results_df = parallelize_dataframe(params_list_df, apply_mean_score_eval)
results_df
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead7_decision_tree_df.pkl')
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
Explanation: - Decision Tree model
End of explanation
from predictor.random_forest_predictor import RandomForestPredictor
# Global variables
eval_predictor = RandomForestPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
results_df = parallelize_dataframe(params_list_df, apply_mean_score_eval)
results_df
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead7_random_forest_df.pkl')
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
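# Optional follow-up (added sketch): compare the best MRE across the models evaluated above
# by reloading the pickled result tables written earlier in this notebook.
model_files = {'linear': '../../data/results_ahead7_linear_df.pkl',
               'knn': '../../data/results_ahead7_knn_df.pkl',
               'decision_tree': '../../data/results_ahead7_decision_tree_df.pkl',
               'random_forest': '../../data/results_ahead7_random_forest_df.pkl'}
print({name: pd.read_pickle(path)['mre'].min() for name, path in model_files.items()})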
Explanation: - Random Forest model
End of explanation |
11,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Data validation using TFX Pipeline and TensorFlow Data Validation
Note
Step2: Install TFX
Step3: Did you restart the runtime?
If you are using Google Colab, the first time that you run
the cell above, you must restart the runtime by clicking
above "RESTART RUNTIME" button or using "Runtime > Restart
runtime ..." menu. This is because of the way that Colab
loads packages.
Check the TensorFlow and TFX versions.
Step4: Set up variables
There are some variables used to define a pipeline. You can customize these
variables as you want. By default all output from the pipeline will be
generated under the current directory.
Step5: Prepare example data
We will download the example dataset for use in our TFX pipeline. The dataset
we are using is
Palmer Penguins dataset
which is also used in other
TFX examples.
There are four numeric features in this dataset
Step6: Take a quick look at the CSV file.
Step8: You should be able to see five feature columns. species is one of 0, 1 or 2,
and all other features should have values between 0 and 1. We will create a TFX
pipeline to analyze this dataset.
Generate a preliminary schema
TFX pipelines are defined using Python APIs. We will create a pipeline to
generate a schema from the input examples automatically. This schema can be
reviewed by a human and adjusted as needed. Once the schema is finalized it can
be used for training and example validation in later tasks.
In addition to CsvExampleGen which is used in
Simple TFX Pipeline Tutorial,
we will use StatisticsGen and SchemaGen
Step9: Run the pipeline
We will use LocalDagRunner as in the previous tutorial.
Step12: You should see "INFO
Step13: Now we can examine the outputs from the pipeline execution.
Step14: It is time to examine the outputs from each component. As described above,
Tensorflow Data Validation(TFDV)
is used in StatisticsGen and SchemaGen, and TFDV also
provides visualization of the outputs from these components.
In this tutorial, we will use the visualization helper methods in TFX which
use TFDV internally to show the visualization.
Examine the output from StatisticsGen
Step15: You can see various stats for the input data. These statistics are supplied to
SchemaGen to construct an initial schema of data automatically.
Examine the output from SchemaGen
Step16: This schema is automatically inferred from the output of StatisticsGen. You
should be able to see 4 FLOAT features and 1 INT feature.
Export the schema for future use
We need to review and refine the generated schema. The reviewed schema needs
to be persisted to be used in subsequent pipelines for ML model training. In
other words, you might want to add the schema file to your version control
system for actual use cases. In this tutorial, we will just copy the schema
to a predefined filesystem path for simplicity.
Step17: The schema file uses
Protocol Buffer text format
and an instance of
TensorFlow Metadata Schema proto.
Step21: You should be sure to review and possibly edit the schema definition as
needed. In this tutorial, we will just use the generated schema unchanged.
Validate input examples and train an ML model
We will go back to the pipeline that we created in
Simple TFX Pipeline Tutorial,
to train an ML model and use the generated schema for writing the model
training code.
We will also add an
ExampleValidator
component which will look for anomalies and missing values in the incoming
dataset with respect to the schema.
Write model training code
We need to write the model code as we did in
Simple TFX Pipeline Tutorial.
The model itself is the same as in the previous tutorial, but this time we will
use the schema generated from the previous pipeline instead of specifying
features manually. Most of the code was not changed. The only difference is
that we do not need to specify the names and types of features in this file.
Instead, we read them from the schema file.
Step23: Now you have completed all preparation steps to build a TFX pipeline for
model training.
Write a pipeline definition
We will add two new components, Importer and ExampleValidator. Importer
brings an external file into the TFX pipeline. In this case, it is a file
containing schema definition. ExampleValidator will examine
the input data and validate whether all input data conforms to the data schema
we provided.
Step24: Run the pipeline
Step25: You should see "INFO
Step26: ExampleAnomalies from the ExampleValidator can be visualized as well. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
try:
import colab
!pip install --upgrade pip
except:
pass
Explanation: Data validation using TFX Pipeline and TensorFlow Data Validation
Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click "Run in Google Colab".
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/penguin_tfdv">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/penguin_tfdv.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/penguin_tfdv.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/penguin_tfdv.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
</table></div>
In this notebook-based tutorial, we will create and run TFX pipelines
to validate input data and create an ML model. This notebook is based on the
TFX pipeline we built in
Simple TFX Pipeline Tutorial.
If you have not read that tutorial yet, you should read it before proceeding
with this notebook.
The first task in any data science or ML project is to understand and clean
the data, which includes:
- Understanding the data types, distributions, and other information (e.g.,
mean value, or number of uniques) about each feature
- Generating a preliminary schema that describes the data
- Identifying anomalies and missing values in the data with respect to given
schema
In this tutorial, we will create two TFX pipelines.
First, we will create a pipeline to analyze the dataset and generate a
preliminary schema of the given dataset. This pipeline will include two new
components, StatisticsGen and SchemaGen.
Once we have a proper schema of the data, we will create a pipeline to train
an ML classification model based on the pipeline from the previous tutorial.
In this pipeline, we will use the schema from the first pipeline and a
new component, ExampleValidator, to validate the input data.
The three new components, StatisticsGen, SchemaGen and ExampleValidator, are
TFX components for data analysis and validation, and they are implemented
using the
TensorFlow Data Validation library.
Please see
Understanding TFX Pipelines
to learn more about various concepts in TFX.
Set Up
We first need to install the TFX Python package and download
the dataset which we will use for our model.
Upgrade Pip
To avoid upgrading Pip in a system when running locally,
check to make sure that we are running in Colab.
Local systems can of course be upgraded separately.
End of explanation
!pip install -U tfx
Explanation: Install TFX
End of explanation
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
Explanation: Did you restart the runtime?
If you are using Google Colab, the first time that you run
the cell above, you must restart the runtime by clicking
above "RESTART RUNTIME" button or using "Runtime > Restart
runtime ..." menu. This is because of the way that Colab
loads packages.
Check the TensorFlow and TFX versions.
End of explanation
import os
# We will create two pipelines. One for schema generation and one for training.
SCHEMA_PIPELINE_NAME = "penguin-tfdv-schema"
PIPELINE_NAME = "penguin-tfdv"
# Output directory to store artifacts generated from the pipeline.
SCHEMA_PIPELINE_ROOT = os.path.join('pipelines', SCHEMA_PIPELINE_NAME)
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
SCHEMA_METADATA_PATH = os.path.join('metadata', SCHEMA_PIPELINE_NAME,
'metadata.db')
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where created models from the pipeline will be exported.
SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)
from absl import logging
logging.set_verbosity(logging.INFO) # Set default logging level.
Explanation: Set up variables
There are some variables used to define a pipeline. You can customize these
variables as you want. By default all output from the pipeline will be
generated under the current directory.
End of explanation
import urllib.request
import tempfile
DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory.
_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlretrieve(_data_url, _data_filepath)
Explanation: Prepare example data
We will download the example dataset for use in our TFX pipeline. The dataset
we are using is
Palmer Penguins dataset
which is also used in other
TFX examples.
There are four numeric features in this dataset:
culmen_length_mm
culmen_depth_mm
flipper_length_mm
body_mass_g
All features were already normalized to have range [0,1]. We will build a
classification model which predicts the species of penguins.
Because the TFX ExampleGen component reads inputs from a directory, we need
to create a directory and copy the dataset to it.
End of explanation
!head {_data_filepath}
Explanation: Take a quick look at the CSV file.
End of explanation
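If you prefer a structured peek, the same file can also be loaded with pandas (an optional aside, not used elsewhere in this pipeline):
import pandas as pd
pd.read_csv(_data_filepath).head()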
def _create_schema_pipeline(pipeline_name: str,
pipeline_root: str,
data_root: str,
metadata_path: str) -> tfx.dsl.Pipeline:
Creates a pipeline for schema generation.
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# NEW: Computes statistics over data for visualization and schema generation.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# NEW: Generates schema based on the generated statistics.
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True)
components = [
example_gen,
statistics_gen,
schema_gen,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
Explanation: You should be able to see five feature columns. species is one of 0, 1 or 2,
and all other features should have values between 0 and 1. We will create a TFX
pipeline to analyze this dataset.
Generate a preliminary schema
TFX pipelines are defined using Python APIs. We will create a pipeline to
generate a schema from the input examples automatically. This schema can be
reviewed by a human and adjusted as needed. Once the schema is finalized it can
be used for training and example validation in later tasks.
In addition to CsvExampleGen which is used in
Simple TFX Pipeline Tutorial,
we will use StatisticsGen and SchemaGen:
StatisticsGen calculates
statistics for the dataset.
SchemaGen examines the
statistics and creates an initial data schema.
See the guides for each component or
TFX components tutorial
to learn more on these components.
Write a pipeline definition
We define a function to create a TFX pipeline. A Pipeline object
represents a TFX pipeline which can be run using one of pipeline
orchestration systems that TFX supports.
End of explanation
tfx.orchestration.LocalDagRunner().run(
_create_schema_pipeline(
pipeline_name=SCHEMA_PIPELINE_NAME,
pipeline_root=SCHEMA_PIPELINE_ROOT,
data_root=DATA_ROOT,
metadata_path=SCHEMA_METADATA_PATH))
Explanation: Run the pipeline
We will use LocalDagRunner as in the previous tutorial.
End of explanation
from ml_metadata.proto import metadata_store_pb2
# Non-public APIs, just for showcase.
from tfx.orchestration.portable.mlmd import execution_lib
# TODO(b/171447278): Move these functions into the TFX library.
def get_latest_artifacts(metadata, pipeline_name, component_id):
Output artifacts of the latest run of the component.
context = metadata.store.get_context_by_type_and_name(
'node', f'{pipeline_name}.{component_id}')
executions = metadata.store.get_executions_by_context(context.id)
latest_execution = max(executions,
key=lambda e:e.last_update_time_since_epoch)
return execution_lib.get_artifacts_dict(metadata, latest_execution.id,
[metadata_store_pb2.Event.OUTPUT])
# Non-public APIs, just for showcase.
from tfx.orchestration.experimental.interactive import visualizations
def visualize_artifacts(artifacts):
Visualizes artifacts using standard visualization modules.
for artifact in artifacts:
visualization = visualizations.get_registry().get_visualization(
artifact.type_name)
if visualization:
visualization.display(artifact)
from tfx.orchestration.experimental.interactive import standard_visualizations
standard_visualizations.register_standard_visualizations()
Explanation: You should see "INFO:absl:Component SchemaGen is finished." if the pipeline
finished successfully.
We will examine the output of the pipeline to understand our dataset.
Review outputs of the pipeline
As explained in the previous tutorial, a TFX pipeline produces two kinds of
outputs, artifacts and a
metadata DB(MLMD) which contains
metadata of artifacts and pipeline executions. We defined the location of
these outputs in the above cells. By default, artifacts are stored under
the pipelines directory and metadata is stored as a sqlite database
under the metadata directory.
You can use MLMD APIs to locate these outputs programatically. First, we will
define some utility functions to search for the output artifacts that were just
produced.
End of explanation
# Non-public APIs, just for showcase.
from tfx.orchestration.metadata import Metadata
from tfx.types import standard_component_specs
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
SCHEMA_METADATA_PATH)
with Metadata(metadata_connection_config) as metadata_handler:
# Find output artifacts from MLMD.
stat_gen_output = get_latest_artifacts(metadata_handler, SCHEMA_PIPELINE_NAME,
'StatisticsGen')
stats_artifacts = stat_gen_output[standard_component_specs.STATISTICS_KEY]
schema_gen_output = get_latest_artifacts(metadata_handler,
SCHEMA_PIPELINE_NAME, 'SchemaGen')
schema_artifacts = schema_gen_output[standard_component_specs.SCHEMA_KEY]
Explanation: Now we can examine the outputs from the pipeline execution.
End of explanation
# docs-infra: no-execute
visualize_artifacts(stats_artifacts)
Explanation: It is time to examine the outputs from each component. As described above,
Tensorflow Data Validation(TFDV)
is used in StatisticsGen and SchemaGen, and TFDV also
provides visualization of the outputs from these components.
In this tutorial, we will use the visualization helper methods in TFX which
use TFDV internally to show the visualization.
Examine the output from StatisticsGen
End of explanation
visualize_artifacts(schema_artifacts)
Explanation: You can see various stats for the input data. These statistics are supplied to
SchemaGen to construct an initial schema of data automatically.
Examine the output from SchemaGen
End of explanation
import shutil
_schema_filename = 'schema.pbtxt'
SCHEMA_PATH = 'schema'
os.makedirs(SCHEMA_PATH, exist_ok=True)
_generated_path = os.path.join(schema_artifacts[0].uri, _schema_filename)
# Copy the 'schema.pbtxt' file from the artifact uri to a predefined path.
shutil.copy(_generated_path, SCHEMA_PATH)
Explanation: This schema is automatically inferred from the output of StatisticsGen. You
should be able to see 4 FLOAT features and 1 INT feature.
Export the schema for future use
We need to review and refine the generated schema. The reviewed schema needs
to be persisted to be used in subsequent pipelines for ML model training. In
other words, you might want to add the schema file to your version control
system for actual use cases. In this tutorial, we will just copy the schema
to a predefined filesystem path for simplicity.
End of explanation
print(f'Schema at {SCHEMA_PATH}-----')
!cat {SCHEMA_PATH}/*
Explanation: The schema file uses
Protocol Buffer text format
and an instance of
TensorFlow Metadata Schema proto.
End of explanation
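As an optional orientation step (not part of the original tutorial), the copied file can be parsed back into a Schema proto to list the feature names and types; this assumes the copy made above under SCHEMA_PATH.
from google.protobuf import text_format
from tensorflow_metadata.proto.v0 import schema_pb2
with open(os.path.join(SCHEMA_PATH, _schema_filename)) as schema_file:
    parsed_schema = text_format.Parse(schema_file.read(), schema_pb2.Schema())
print([(feat.name, schema_pb2.FeatureType.Name(feat.type)) for feat in parsed_schema.feature])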
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
# We don't need to specify _FEATURE_KEYS and _FEATURE_SPEC any more.
# Those information can be read from the given schema file.
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int = 200) -> tf.data.Dataset:
Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _build_keras_model(schema: schema_pb2.Schema) -> tf.keras.Model:
Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
# ++ Changed code: Uses all features in the schema except the label.
feature_keys = [f.name for f in schema.feature if f.name != _LABEL_KEY]
inputs = [keras.layers.Input(shape=(1,), name=f) for f in feature_keys]
# ++ End of the changed code.
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
# ++ Changed code: Reads in schema file passed to the Trainer component.
schema = tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema_pb2.Schema())
# ++ End of the changed code.
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _build_keras_model(schema)
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
Explanation: You should be sure to review and possibly edit the schema definition as
needed. In this tutorial, we will just use the generated schema unchanged.
Validate input examples and train an ML model
We will go back to the pipeline that we created in
Simple TFX Pipeline Tutorial,
to train an ML model and use the generated schema for writing the model
training code.
We will also add an
ExampleValidator
component which will look for anomalies and missing values in the incoming
dataset with respect to the schema.
Write model training code
We need to write the model code as we did in
Simple TFX Pipeline Tutorial.
The model itself is the same as in the previous tutorial, but this time we will
use the schema generated from the previous pipeline instead of specifying
features manually. Most of the code was not changed. The only difference is
that we do not need to specify the names and types of features in this file.
Instead, we read them from the schema file.
End of explanation
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
schema_path: str, module_file: str, serving_model_dir: str,
metadata_path: str) -> tfx.dsl.Pipeline:
Creates a pipeline using predefined schema with TFX.
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Computes statistics over data for visualization and example validation.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# NEW: Import the schema.
schema_importer = tfx.dsl.Importer(
source_uri=schema_path,
artifact_type=tfx.types.standard_artifacts.Schema).with_id(
'schema_importer')
# NEW: Performs anomaly detection based on statistics and data schema.
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_importer.outputs['result'])
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
schema=schema_importer.outputs['result'], # Pass the imported schema.
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
# NEW: Following three components were added to the pipeline.
statistics_gen,
schema_importer,
example_validator,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
Explanation: Now you have completed all preparation steps to build a TFX pipeline for
model training.
Write a pipeline definition
We will add two new components, Importer and ExampleValidator. Importer
brings an external file into the TFX pipeline. In this case, it is a file
containing schema definition. ExampleValidator will examine
the input data and validate whether all input data conforms to the data schema
we provided.
End of explanation
tfx.orchestration.LocalDagRunner().run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
schema_path=SCHEMA_PATH,
module_file=_trainer_module_file,
serving_model_dir=SERVING_MODEL_DIR,
metadata_path=METADATA_PATH))
Explanation: Run the pipeline
End of explanation
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
METADATA_PATH)
with Metadata(metadata_connection_config) as metadata_handler:
ev_output = get_latest_artifacts(metadata_handler, PIPELINE_NAME,
'ExampleValidator')
anomalies_artifacts = ev_output[standard_component_specs.ANOMALIES_KEY]
Explanation: You should see "INFO:absl:Component Pusher is finished." if the pipeline
finished successfully.
Examine outputs of the pipeline
We have trained the classification model for penguins, and we also have
validated the input examples in the ExampleValidator component. We can analyze
the output from ExampleValidator as we did with the previous pipeline.
End of explanation
visualize_artifacts(anomalies_artifacts)
Explanation: ExampleAnomalies from the ExampleValidator can be visualized as well.
End of explanation |
11,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 4
Imports
Step1: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$
Step3: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 4
Imports
End of explanation
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
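As an optional aside (not required for the exercise), networkx can also produce the graph Laplacian of $K_5$ directly, which gives us something to cross-check our own matrices against later; this reuses the nx import above.
print(nx.laplacian_matrix(K_5).todense())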
def complete_deg(n):
    Return the integer valued degree matrix D for the complete graph K_n.
    # n-1 along the diagonal, zeros elsewhere; use the builtin int dtype
    # (the np.int alias has been removed in recent NumPy releases).
    b = np.zeros((n, n), dtype=int)
    for x in range(n):
        b[x, x] = n - 1
    return b
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
def complete_adj(n):
    Return the integer valued adjacency matrix A for the complete graph K_n.
    # Ones everywhere except the diagonal, again with the builtin int dtype.
    b = np.ones((n, n), dtype=int)
    for x in range(n):
        b[x, x] = 0
    return b
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
def laplacian(n):
return complete_deg(n)-complete_adj(n)
one = laplacian(1)
two = laplacian(2)
three = laplacian(3)
four = laplacian(4)
ten = laplacian(10)
five = laplacian(5)
print(one)
print(np.linalg.eigvals(one))
print(two)
print(np.linalg.eigvals(two))
print(three)
print(np.linalg.eigvals(three))
print(four)
print(np.linalg.eigvals(four))
print(five)
print(np.linalg.eigvals(five))
print(ten)
print(np.linalg.eigvals(ten))
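# (Added check, not part of the original exercise.) The printouts above suggest that for K_n the
# Laplacian spectrum is a single 0 plus the eigenvalue n repeated n-1 times; verify for several n.
for n in range(2, 12):
    eigs = np.sort(np.linalg.eigvals(laplacian(n)).real)
    assert np.allclose(eigs, np.concatenate(([0.0], n * np.ones(n - 1)))), n
print("Spectrum of L(K_n): 0 once and n with multiplicity n-1 (checked for n = 2..11)")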
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation |
11,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keywords Stuff
Step1: Documents with similar sets of keywords should have similar content
a document can be represented by a vector indicating whether a keyword is present or absent for the document
Distances between these vectors can be used to cluster / group documents
Step2: Convert Keyword Lists to Vectorspace, Calc Similarities
Step3: Perform clustering | Python Code:
from collections import defaultdict
keycounts = defaultdict(int)
def updateKeycounts(kws):
for kw in kws:
keycounts[kw] += 1
_ = keywords.apply(lambda x: updateKeycounts(x.keywords), axis=1)
keycounts = pandas.DataFrame({"word" : [w for w in keycounts.keys()],
"count" : [c for c in keycounts.values()]})
keycounts.sort_values(by="count", ascending=False, inplace=True)
keycounts.index = range(keycounts.shape[0])
keycounts.head()
keycounts['count'].describe(percentiles=[.25, .50, .75, .80, .85, .90, .95])
g = sns.barplot(x="word", y="count", data=keycounts[keycounts['count']>=100], estimator=sum);
g.figure.set_size_inches(15,10);
plt.xticks(rotation="vertical", size=12);
plt.title('Total Occurrences of Each Keyword', size=14);
plt.ylabel("");
Explanation: Keywords Stuff
End of explanation
from sklearn import metrics, feature_extraction, feature_selection, cluster
Explanation: Documents with similar sets of keywords should have similar content
a document can be represented by a vector indicating whether a keyword is present or absent for the document
Distances between these vectors can be used to cluster / group documents
End of explanation
def list2dict(l):
return({n : 1 for n in l})
keywords['kyd'] = keywords.apply(lambda x: list2dict(x.keywords), axis=1)
keywords.kyd[0]
kwDV = feature_extraction.DictVectorizer()
kw_feats = kwDV.fit_transform(keywords['kyd']).todense()
kw_feats.shape
len(kwDV.get_feature_names())
# Restrict the features being used
# keywords must occur at least 10 times in the data;
# larger requirements result in too many groups due to
count_cutoff = 1
support = keycounts.apply(lambda x: x['count'] >= count_cutoff, axis=1)
kwDV.restrict(support)
len(kwDV.get_feature_names())
kw_feats = kwDV.transform(keywords['kyd']).todense()
kw_feats.shape
pandas.Series(numpy.array(kw_feats.sum(axis=1)).reshape([2100,])).describe()
kw_cos = metrics.pairwise.cosine_similarity(kw_feats)
Explanation: Convert Keyword Lists to Vectorspace, Calc Similarities
End of explanation
bandwidth = cluster.estimate_bandwidth(kw_feats, quantile=0.3, n_samples=1000, n_jobs=4)
ms = cluster.MeanShift(bandwidth=bandwidth, n_jobs=4)
ms.fit(kw_feats)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
labels_unique = numpy.unique(labels)
n_clusters_ = len(labels_unique)
print("number of estimated clusters : %d" % n_clusters_)
cc_vals = pandas.Series(cluster_centers.reshape([cluster_centers.size,]))
cc_vals.describe(percentiles=[0.50, 0.80, 0.90, 0.95, 0.975, 0.99])
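# Optional added check: see how the documents are spread across the estimated clusters;
# a few large clusters plus many near-singletons is typical for MeanShift on sparse binary features.
print(pandas.Series(labels).value_counts().head(10))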
Explanation: Perform clustering
End of explanation |
11,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
for d in ds[1
Step1: with open("models/bar_models", "rb") as f | Python Code:
# !say "Finished"
eval_complex_model(a)
save_model(a, "attempt_two/zero_models")
Explanation: for d in ds[1:]:
trainerer(a,d[:10], 1000,l_r = 0.006, batches= 5 )
trainerer(a,ds_three[:200], 1000,l_r = 0.002, batches= 1 )
End of explanation
trainerer(a,ds_twos[:100], 10,l_r = 0.002, batches = 5)
vs = ContinuousSampler(a)
ds_twos[0].shape
pp.image(vs.reconstruction_given_visible(ds_twos[0].reshape(784)).reshape(28,28))
ds_all = np.vstack((ds))
from sklearn.neural_network import BernoulliRBM
model = BernoulliRBM(n_components=100, n_iter=40)
model.fit(rbmpy.datasets.flatten_data_set(ds_all))
eval_complex_model(rbmpy.rbm.create_from_sklearn_rbm(model, 784, ds_nine.shape[0]))
model
Explanation: with open("models/bar_models", "rb") as f:
a = pickle.load(f)
End of explanation |
11,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name='output')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
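# Optional added sketch: peek at the 32-dimensional codes for the same ten test images;
# `compressed` was fetched above together with `reconstructed` but is otherwise unused.
fig2, ax2 = plt.subplots(figsize=(10, 4))
ax2.imshow(compressed.T, cmap='Greys_r', aspect='auto')
ax2.set_xlabel('test image')
ax2.set_ylabel('encoding unit')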
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
11,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustering with KMeans in Shogun Machine Learning Toolbox
Notebook by Parijat Mazumdar (GitHub ID
Step1: The toy data created above consists of 4 Gaussian blobs, having 200 points each, centered around the vertices of a rectangle. Let's plot it for convenience.
Step2: With data at our disposal, it is time to apply KMeans to it using the KMeans class in Shogun. First we construct Shogun features from our data
Step3: Next we specify the number of clusters we want and create a distance object specifying the distance metric to be used over our data for our KMeans training
Step4: Next, we create a KMeans object with our desired inputs/parameters and train
Step5: Now that training has been done, let's get the cluster centers and label for each data point
Step6: Finally let us plot the centers and the data points (in different colours for different clusters)
Step7: <b>Note
Step8: Now, let's first get results by repeating the rest of the steps
Step9: The other way to initialize centers by hand is as follows
Step10: Let's complete the rest of the code to get results.
Step11: Note the difference that initial cluster centers can have on the final result.
Initializing using KMeans++ algorithm
In Shogun, a user can also use <a href="http
Step12: The other way to initialize using KMeans++ is as follows
Step13: Completing rest of the steps to get result
Step14: To switch back to random initialization, you may use
Step15: Training Methods
Shogun offers 2 training methods for KMeans clustering
Step16: In mini-batch KMeans it is compulsory to set batch-size and number of iterations. These parameters can be set together or one after the other.
Step17: Completing the code to get results
Step18: Applying KMeans on Real Data
In this section we see how useful KMeans can be in classifying the different varieties of Iris plant. For this purpose, we make use of Fisher's Iris dataset borrowed from the <a href='http
Step19: In the above plot we see that the data points labelled Iris Sentosa form a nice separate cluster of their own. But in case of other 2 varieties, while the data points of same label do form clusters of their own, there is some mixing between the clusters at the boundary. Now let us apply KMeans algorithm and see how well we can extract these clusters.
Step20: Now let us create a 2-D plot of the clusters formed making use of the two most important features (petal length and petal width) and compare it with the earlier plot depicting the actual labels of data points.
Step21: From the above plot, it can be inferred that the accuracy of KMeans algorithm is very high for Iris dataset. Don't believe me? Alright, then let us make use of one of Shogun's clustering evaluation techniques to formally validate the claim. But before that, we have to label each sample in the dataset with a label corresponding to the class to which it belongs.
Step22: Now we can compute clustering accuracy making use of the ClusteringAccuracy class in Shogun
Step23: In the above plot, wrongly clustered data points are marked in red. We see that the Iris Sentosa plants are perfectly clustered without error. The Iris Versicolour plants and Iris Virginica plants are also clustered with high accuracy, but there are some plant samples of either class that have been clustered with the wrong class. This happens near the boundary of the 2 classes in the plot and was well expected. Having mastered KMeans, it's time to move on to next interesting topic.
PCA as a preprocessor to KMeans
KMeans is highly affected by the <i>curse of dimensionality</i>. So, dimension reduction becomes an important preprocessing step. Shogun offers a variety of dimension reduction techniques to choose from. Since our data is not very high dimensional, PCA is a good choice for dimension reduction. We have already seen the accuracy of KMeans when all four dimensions are used. In the following exercise we shall see how the accuracy varies as one chooses lower dimensions to represent data.
1-Dimensional representation
Let us first apply PCA to reduce training features to 1 dimension
Step24: Next, let us get an idea of the data in 1-D by plotting it.
Step25: Let us now apply KMeans to the 1-D data to get clusters.
Step26: Now that we have the results, the inevitable step is to check how good these results are.
Step27: 2-Dimensional Representation
We follow the same steps as above and get the clustering accuracy.
STEP 1
Step28: STEP 2
Step29: STEP 3
Step30: 3-Dimensional Representation
Again, we follow the same steps, but skip plotting data.
STEP 1
Step31: STEP 2
Step32: STEP 3
Step33: Finally, let us plot clustering accuracy vs. number of dimensions to consolidate our results. | Python Code:
from numpy import concatenate, array
from numpy.random import randn
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
num = 200
d1 = concatenate((randn(1,num),10.*randn(1,num)),0)
d2 = concatenate((randn(1,num),10.*randn(1,num)),0)+array([[10.],[0.]])
d3 = concatenate((randn(1,num),10.*randn(1,num)),0)+array([[0.],[100.]])
d4 = concatenate((randn(1,num),10.*randn(1,num)),0)+array([[10.],[100.]])
rectangle = concatenate((d1,d2,d3,d4),1)
totalPoints = 800
Explanation: Clustering with KMeans in Shogun Machine Learning Toolbox
Notebook by Parijat Mazumdar (GitHub ID: <a href='https://github.com/mazumdarparijat'>mazumdarparijat</a>)
This notebook demonstrates <a href="http://en.wikipedia.org/wiki/K-means_clustering">clustering with KMeans</a> in Shogun along with its initialization and training. The initialization of cluster centres is shown manually, randomly and using the <a href="http://en.wikipedia.org/wiki/K-means%2B%2B">KMeans++</a> algorithm. Training is done via the classical <a href="http://en.wikipedia.org/wiki/Lloyd%27s_algorithm">Lloyds</a> and mini-batch KMeans method.
It is then applied to a real world data set. Furthermore, the effect of dimensionality reduction using <a href="http://en.wikipedia.org/wiki/Principal_component_analysis">PCA</a> is analysed on the KMeans algorithm.
KMeans - An Overview
The <a href="http://en.wikipedia.org/wiki/K-means_clustering">KMeans clustering algorithm</a> is used to partition a space of n observations into k partitions (or clusters). Each of these clusters is denoted by the mean of the observation vectors belonging to it and a unique label which is attached to all the observations belonging to it. Thus, in general, the algorithm takes parameter k and an observation matrix (along with the notion of distance between points ie <i>distance metric</i>) as input and returns mean of each of the k clusters along with labels indicating belongingness of each observations. Let us construct a simple example to understand how it is done in Shogun using the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKMeans.html">CKMeans</a> class.
Let us start by creating a toy dataset.
End of explanation
import matplotlib.pyplot as pyplot
%matplotlib inline
figure,axis = pyplot.subplots(1,1)
axis.plot(rectangle[0], rectangle[1], 'o', color='r', markersize=5)
axis.set_xlim(-5,15)
axis.set_ylim(-50,150)
axis.set_title('Toy data : Rectangle')
pyplot.show()
Explanation: The toy data created above consists of 4 Gaussian blobs, having 200 points each, centered around the vertices of a rectangle. Let's plot it for convenience.
End of explanation
from shogun import *
train_features = features(rectangle)
Explanation: With data at our disposal, it is time to apply KMeans to it using the KMeans class in Shogun. First we construct Shogun features from our data:
End of explanation
# number of clusters
k = 2
# distance metric over feature matrix - Euclidean distance
distance = EuclideanDistance(train_features, train_features)
Explanation: Next we specify the number of clusters we want and create a distance object specifying the distance metric to be used over our data for our KMeans training:
End of explanation
# KMeans object created
kmeans = KMeans(k, distance)
# KMeans training
kmeans.train()
Explanation: Next, we create a KMeans object with our desired inputs/parameters and train:
End of explanation
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
Explanation: Now that training has been done, let's get the cluster centers and label for each data point
End of explanation
def plotResult(title = 'KMeans Plot'):
figure,axis = pyplot.subplots(1,1)
for i in range(totalPoints):
if result[i]==0.0:
axis.plot(rectangle[0,i], rectangle[1,i], 'o', color='g', markersize=3)
else:
axis.plot(rectangle[0,i], rectangle[1,i], 'o', color='y', markersize=3)
axis.plot(centers[0,0], centers[1,0], 'ko', color='g', markersize=10)
axis.plot(centers[0,1], centers[1,1], 'ko', color='y', markersize=10)
axis.set_xlim(-5,15)
axis.set_ylim(-50,150)
axis.set_title(title)
pyplot.show()
plotResult('KMeans Results')
Explanation: Finally let us plot the centers and the data points (in different colours for different clusters):
End of explanation
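As an optional sanity check (not in the original notebook), we can also count how many of the 800 points were assigned to each cluster; result supports per-point indexing, as already used inside plotResult.
from collections import Counter
print(Counter(result[i] for i in range(totalPoints)))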
from numpy import array
initial_centers = array([[0.,10.],[50.,50.]])
# initial centers passed
kmeans = KMeans(k, distance, initial_centers)
Explanation: <b>Note:</b> You might not get the perfect result always. That is an inherent flaw of KMeans algorithm. In subsequent sections, we will discuss techniques which allow us to counter this.<br>
Now that we have already worked out a simple KMeans implementation, it's time to understand certain specifics of KMeans implementation and the options provided by Shogun to its users.
Initialization of cluster centers
The KMeans algorithm requires that the cluster centers are initialized with some values. Shogun offers 3 ways to initialize the clusters. <ul><li>Random initialization (default)</li><li>Initialization by hand</li><li>Initialization using <a href="http://en.wikipedia.org/wiki/K-means%2B%2B">KMeans++ algorithm</a></li></ul>Unless the user supplies initial centers or tells Shogun to use KMeans++, Random initialization is the default method used for cluster center initialization. This was precisely the case in the example discussed above.
Initialization by hand
There are 2 ways to initialize centers by hand. One way is to pass on the centers during KMeans object creation, as follows:
End of explanation
# KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('Hand initialized KMeans Results 1')
Explanation: Now, let's first get results by repeating the rest of the steps:
End of explanation
new_initial_centers = array([[5.,5.],[0.,100.]])
# set new initial centers
kmeans.set_initial_centers(new_initial_centers)
Explanation: The other way to initialize centers by hand is as follows:
End of explanation
# KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('Hand initialized KMeans Results 2')
Explanation: Let's complete the rest of the code to get results.
End of explanation
# set flag for using KMeans++
kmeans = KMeans(k, distance, True)
Explanation: Note the difference that initial cluster centers can have on the final result.
Initializing using KMeans++ algorithm
In Shogun, a user can also use <a href="http://en.wikipedia.org/wiki/K-means%2B%2B">KMeans++ algorithm</a> for center initialization. Using KMeans++ for center initialization is beneficial because it reduces total iterations used by KMeans and also the final centers mostly correspond to the global minima, which is often not the case with KMeans with random initialization. One of the ways to use KMeans++ is to set flag as <i>true</i> during KMeans object creation, as follows:
End of explanation
# set KMeans++ flag
kmeans.set_use_kmeanspp(True)
Explanation: The other way to initialize using KMeans++ is as follows:
End of explanation
# KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('KMeans with KMeans++ Results')
Explanation: Completing rest of the steps to get result:
End of explanation
#unset KMeans++ flag
kmeans.set_use_kmeanspp(False)
Explanation: To switch back to random initialization, you may use:
End of explanation
# set training method to mini-batch
kmeans = KMeansMiniBatch(k, distance)
Explanation: Training Methods
Shogun offers 2 training methods for KMeans clustering:<ul><li><a href='http://en.wikipedia.org/wiki/K-means_clustering#Standard_algorithm'>Classical Lloyd's training</a> (default)</li><li><a href='http://www.eecs.tufts.edu/~dsculley/papers/fastkmeans.pdf'>mini-batch KMeans training</a></li></ul>Lloyd's training method is used by Shogun by default unless user switches to mini-batch training method.
Mini-Batch KMeans
Mini-batch KMeans is very useful in case of extremely large datasets and/or very high dimensional data which is often the case in text mining. One can switch to Mini-batch KMeans training while creating KMeans object as follows:
End of explanation
# set both parameters together batch size-2 and no. of iterations-100
kmeans.set_mb_params(2,100)
# OR
# set batch size-2
kmeans.set_batch_size(2)
# set no. of iterations-100
kmeans.set_mb_iter(100)
Explanation: In mini-batch KMeans it is compulsory to set batch-size and number of iterations. These parameters can be set together or one after the other.
End of explanation
# KMeans training
kmeans.train(train_features)
# cluster centers
centers = kmeans.get_cluster_centers()
# Labels for data points
result = kmeans.apply()
# plot the results
plotResult('Mini-batch KMeans Results')
Explanation: Completing the code to get results:
End of explanation
f = open(os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data'))
feats = []
# read data from file
for line in f:
words = line.rstrip().split(',')
feats.append([float(i) for i in words[0:4]])
f.close()
# create observation matrix
obsmatrix = array(feats).T
# plot the data
figure,axis = pyplot.subplots(1,1)
# First 50 data belong to Iris Sentosa, plotted in green
axis.plot(obsmatrix[2,0:50], obsmatrix[3,0:50], 'o', color='green', markersize=5)
# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(obsmatrix[2,50:100], obsmatrix[3,50:100], 'o', color='red', markersize=5)
# Last 50 data belong to Iris Virginica, plotted in blue
axis.plot(obsmatrix[2,100:150], obsmatrix[3,100:150], 'o', color='blue', markersize=5)
axis.set_xlim(-1,8)
axis.set_ylim(-1,3)
axis.set_title('3 varieties of Iris plants')
pyplot.show()
Explanation: Applying KMeans on Real Data
In this section we see how useful KMeans can be in classifying the different varieties of Iris plant. For this purpose, we make use of Fisher's Iris dataset borrowed from the <a href='http://archive.ics.uci.edu/ml/datasets/Iris'>UCI Machine Learning Repository</a>. There are 3 varieties of Iris plants
<ul><li>Iris Sentosa</li><li>Iris Versicolour</li><li>Iris Virginica</li></ul>
The Iris dataset enlists 4 features that can be used to segregate these varieties, namely
<ul><li>sepal length</li><li>sepal width</li><li>petal length</li><li>petal width</li></ul>
It is additionally acknowledged that petal length and petal width are the 2 most important features (ie. features with very high class correlations)[refer to <a href='http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names'>summary statistics</a>]. Since the entire feature vector is impossible to plot, we only plot these two most important features in order to understand the dataset (at least partially). Note that we could have extracted the 2 most important features by applying PCA (or any one of the many dimensionality reduction methods available in Shogun) as well.
End of explanation
def apply_kmeans_iris(data):
# wrap to Shogun features
train_features = features(data)
# number of cluster centers = 3
k = 3
# distance function features - euclidean
distance = EuclideanDistance(train_features, train_features)
# initialize KMeans object
kmeans = KMeans(k, distance)
# use kmeans++ to initialize centers [play around: change it to False and compare results]
kmeans.set_use_kmeanspp(True)
# training method is Lloyd by default [play around: change it to mini-batch by uncommenting the following lines]
#kmeans.set_train_method(KMM_MINI_BATCH)
#kmeans.set_mbKMeans_params(20,30)
# training kmeans
kmeans.train(train_features)
# labels for data points
result = kmeans.apply()
return result
result = apply_kmeans_iris(obsmatrix)
Explanation: In the above plot we see that the data points labelled Iris Setosa form a nice separate cluster of their own. But in the case of the other 2 varieties, while the data points of the same label do form clusters of their own, there is some mixing between the clusters at the boundary. Now let us apply the KMeans algorithm and see how well we can extract these clusters.
End of explanation
# plot the clusters over the original points in 2 dimensions
figure,axis = pyplot.subplots(1,1)
for i in range(150):
if result[i]==0.0:
axis.plot(obsmatrix[2,i],obsmatrix[3,i],'ko',color='r', markersize=5)
elif result[i]==1.0:
axis.plot(obsmatrix[2,i],obsmatrix[3,i],'ko',color='g', markersize=5)
else:
axis.plot(obsmatrix[2,i],obsmatrix[3,i],'ko',color='b', markersize=5)
axis.set_xlim(-1,8)
axis.set_ylim(-1,3)
axis.set_title('Iris plants clustered based on attributes')
pyplot.show()
Explanation: Now let us create a 2-D plot of the clusters formed making use of the two most important features (petal length and petal width) and compare it with the earlier plot depicting the actual labels of data points.
End of explanation
from numpy import ones, zeros
# first 50 are iris setosa labelled 0, next 50 are iris versicolour labelled 1 and so on
labels = concatenate((zeros(50),ones(50),2.*ones(50)),0)
# bind labels assigned to Shogun multiclass labels
ground_truth = MulticlassLabels(array(labels,dtype='float64'))
Explanation: From the above plot, it can be inferred that the accuracy of KMeans algorithm is very high for Iris dataset. Don't believe me? Alright, then let us make use of one of Shogun's clustering evaluation techniques to formally validate the claim. But before that, we have to label each sample in the dataset with a label corresponding to the class to which it belongs.
End of explanation
from numpy import nonzero
def analyzeResult(result):
# shogun object for clustering accuracy
AccuracyEval = ClusteringAccuracy()
# changes the labels of result (keeping clusters intact) to produce a best match with ground truth
AccuracyEval.best_map(result, ground_truth)
# evaluates clustering accuracy
accuracy = AccuracyEval.evaluate(result, ground_truth)
# find out which sample points differ from actual labels (or ground truth)
compare = result.get_labels()-labels
diff = nonzero(compare)
return (diff,accuracy)
(diff,accuracy_4d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_4d))
# plot the difference between ground truth and predicted clusters
figure,axis = pyplot.subplots(1,1)
axis.plot(obsmatrix[2,:],obsmatrix[3,:],'x',color='black', markersize=5)
axis.plot(obsmatrix[2,diff],obsmatrix[3,diff],'x',color='r', markersize=7)
axis.set_xlim(-1,8)
axis.set_ylim(-1,3)
axis.set_title('Difference')
pyplot.show()
Explanation: Now we can compute clustering accuracy making use of the ClusteringAccuracy class in Shogun
End of explanation
from numpy import dot
def apply_pca_to_data(target_dims):
train_features = features(obsmatrix)
submean = PruneVarSubMean(False)
submean.init(train_features)
submean.apply_to_feature_matrix(train_features)
preprocessor = PCA()
preprocessor.set_target_dim(target_dims)
preprocessor.init(train_features)
pca_transform = preprocessor.get_transformation_matrix()
new_features = dot(pca_transform.T, train_features)
return new_features
oneD_matrix = apply_pca_to_data(1)
Explanation: In the above plot, wrongly clustered data points are marked in red. We see that the Iris Setosa plants are perfectly clustered without error. The Iris Versicolour plants and Iris Virginica plants are also clustered with high accuracy, but there are some plant samples of either class that have been clustered with the wrong class. This happens near the boundary of the 2 classes in the plot and was well expected. Having mastered KMeans, it's time to move on to the next interesting topic.
PCA as a preprocessor to KMeans
KMeans is highly affected by the <i>curse of dimensionality</i>. So, dimension reduction becomes an important preprocessing step. Shogun offers a variety of dimension reduction techniques to choose from. Since our data is not very high dimensional, PCA is a good choice for dimension reduction. We have already seen the accuracy of KMeans when all four dimensions are used. In the following exercise we shall see how the accuracy varies as one chooses lower dimensions to represent data.
1-Dimensional representation
Let us first apply PCA to reduce training features to 1 dimension
End of explanation
figure,axis = pyplot.subplots(1,1)
# First 50 data belong to Iris Setosa, plotted in green
axis.plot(oneD_matrix[0,0:50], zeros(50), 'o', color='green', markersize=5)
# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(oneD_matrix[0,50:100], zeros(50), 'o', color='red', markersize=5)
# Last 50 data belong to Iris Virginica, plotted in blue
axis.plot(oneD_matrix[0,100:150], zeros(50), 'o', color='blue', markersize=5)
axis.set_xlim(-5,5)
axis.set_ylim(-1,1)
axis.set_title('3 varieties of Iris plants')
pyplot.show()
Explanation: Next, let us get an idea of the data in 1-D by plotting it.
End of explanation
result = apply_kmeans_iris(oneD_matrix)
Explanation: Let us now apply KMeans to the 1-D data to get clusters.
End of explanation
(diff,accuracy_1d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_1d))
# plot the difference between ground truth and predicted clusters
figure,axis = pyplot.subplots(1,1)
axis.plot(oneD_matrix[0,:],zeros(150),'x',color='black', markersize=5)
axis.plot(oneD_matrix[0,diff],zeros(len(diff)),'x',color='r', markersize=7)
axis.set_xlim(-5,5)
axis.set_ylim(-1,1)
axis.set_title('Difference')
pyplot.show()
Explanation: Now that we have the results, the inevitable step is to check how good these results are.
End of explanation
twoD_matrix = apply_pca_to_data(2)
figure,axis = pyplot.subplots(1,1)
# First 50 data belong to Iris Setosa, plotted in green
axis.plot(twoD_matrix[0,0:50], twoD_matrix[1,0:50], 'o', color='green', markersize=5)
# Next 50 data belong to Iris Versicolour, plotted in red
axis.plot(twoD_matrix[0,50:100], twoD_matrix[1,50:100], 'o', color='red', markersize=5)
# Last 50 data belong to Iris Virginica, plotted in blue
axis.plot(twoD_matrix[0,100:150], twoD_matrix[1,100:150], 'o', color='blue', markersize=5)
axis.set_title('3 varieties of Iris plants')
pyplot.show()
Explanation: 2-Dimensional Representation
We follow the same steps as above and get the clustering accuracy.
STEP 1 : Apply PCA and plot the data (plotting is optional)
End of explanation
result = apply_kmeans_iris(twoD_matrix)
Explanation: STEP 2 : Apply KMeans to obtain clusters
End of explanation
(diff,accuracy_2d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_2d))
# plot the difference between ground truth and predicted clusters
figure,axis = pyplot.subplots(1,1)
axis.plot(twoD_matrix[0,:],twoD_matrix[1,:],'x',color='black', markersize=5)
axis.plot(twoD_matrix[0,diff],twoD_matrix[1,diff],'x',color='r', markersize=7)
axis.set_title('Difference')
pyplot.show()
Explanation: STEP 3: Get the accuracy of the results
End of explanation
threeD_matrix = apply_pca_to_data(3)
Explanation: 3-Dimensional Representation
Again, we follow the same steps, but skip plotting data.
STEP 1: Apply PCA to data
End of explanation
result = apply_kmeans_iris(threeD_matrix)
Explanation: STEP 2: Apply KMeans to 3-D representation of data
End of explanation
(diff,accuracy_3d) = analyzeResult(result)
print('Accuracy : ' + str(accuracy_3d))
# plot the difference between ground truth and predicted clusters
figure,axis = pyplot.subplots(1,1)
axis.plot(obsmatrix[2,:],obsmatrix[3,:],'x',color='black', markersize=5)
axis.plot(obsmatrix[2,diff],obsmatrix[3,diff],'x',color='r', markersize=7)
axis.set_title('Difference')
axis.set_xlim(-1,8)
axis.set_ylim(-1,3)
pyplot.show()
Explanation: STEP 3: Get accuracy of results. In this step, the 'difference' plot positions data points based on petal length
and petal width in the original data. This will enable us to visually compare these results with those of KMeans applied
to 4-Dimensional data (i.e. our first result on the Iris dataset).
End of explanation
from scipy.interpolate import interp1d
from numpy import linspace
x = array([1, 2, 3, 4])
y = array([accuracy_1d, accuracy_2d, accuracy_3d, accuracy_4d])
f = interp1d(x, y)
xnew = linspace(1,4,10)
pyplot.plot(x,y,'o',xnew,f(xnew),'-')
pyplot.xlim([0,5])
pyplot.xlabel('no. of dims')
pyplot.ylabel('Clustering Accuracy')
pyplot.title('PCA Results')
pyplot.show()
Explanation: Finally, let us plot clustering accuracy vs. number of dimensions to consolidate our results.
End of explanation |
11,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this little experiment, I printed the likelihoods after each iteration.
The test case was failing with 10% probability and the history had 10 locations. And I requested a certainty for termination of 90%.
Step1: In the next plots you will see that at the beginning the likelihood for the fault location is evenly distributed. There was no observation made.
Then, in the next plot there was an observation ("No fault detected.") at Location 3. Therefore, after Location 3 things are more likely, and on or before it, the fault is less likely.
Scroll down the graphs to 'Iteration 14'. Notice that Locations 8 and 9 have dropped to 0, and Location 7 is around 0.25. This means that there was a fault detected at Location 7. Therefore, the algorithm knows that 8 and 9 cannot be the first faulty location.
Afterwards the algorithm evaluates at Location 6 until it is certain that it will not detect the fault there. Once Location 7 reaches certainty/likelihood 90%, the algorithm terminates. | Python Code:
with open('example_run.csv') as f: s = f.read()
N = 10
runs = [[1/N for _ in range(N)]]
for line in s.split('\n'):
line = line.strip('[]')
if len(line) > 0:
li = [float(i) for i in line.split(',')]
runs.append(li)
Explanation: In this little experiment, I printed the likelihoods after each iteration.
The test case was failing with 10% probability and the history had 10 locations. And I requested a certainty for termination of 90%.
End of explanation
for i, r in enumerate(runs):
plt.bar(list(range(10)), r)
plt.xlabel('Location')
plt.ylabel('Likelihood after {} iterations'.format(i))
plt.xticks(range(N))
plt.show()
fig, ax = plt.subplots()
# fig.set_tight_layout(True)
ax.set_xlim((0, 10))
ax.set_ylim((0, 1))
line, = ax.plot([], [])
x = list(range(N))
ylabel_func = lambda i: 'Likelihood after {} iterations'.format(i)
def init():
line.set_data([], [])
return (line, )
def animate(i):
y = runs[i]
line.set_data(x, y)
return (line,)
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=72, interval=40, blit=True)
# HTML(anim.to_html5_video())
rc('animation', html='html5')
anim
Explanation: In the next plots you will see that at the beginning the likelihood for the fault location is evenly distributed. There was no observation made.
Then, in the next plot there was an observation ("No fault detected.") at Location 3. Therefore, after Location 3 things are more likely, and on or before it, the fault is less likely.
Scroll down the graphs to 'Iteration 14'. Notice that Locations 8 and 9 have dropped to 0, and Location 7 is around 0.25. This means that there was a fault detected at Location 7. Therefore, the algorithm knows that 8 and 9 cannot be the first faulty location.
Afterwards the algorithm evaluates at Location 6 until it is certain that it will not detect the fault there. Once Location 7 reaches certainty/likelihood 90%, the algorithm terminates.
End of explanation |
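The update rule that produced these likelihoods is not shown in the notebook. Below is a minimal sketch of the kind of Bayesian update described above; it is a reconstruction under stated assumptions (the fault, once present at the evaluated location, is detected with probability p_detect = 0.1, and an observation is a pair of location and whether a fault was detected), not the author's actual implementation.
import numpy as np

def update_likelihood(likelihood, location, fault_detected, p_detect=0.1):
    # likelihood[i] = probability that location i is the first faulty location
    posterior = likelihood.copy()
    for i in range(len(likelihood)):
        fault_present = location >= i        # the fault shows up at `location` only if it starts at or before it
        p_obs = p_detect if fault_present else 0.0
        posterior[i] *= p_obs if fault_detected else (1.0 - p_obs)
    return posterior / posterior.sum()

likelihood = np.full(10, 0.1)                           # evenly distributed before any observation
likelihood = update_likelihood(likelihood, 3, False)    # "No fault detected." at Location 3
likelihood = update_likelihood(likelihood, 7, True)     # fault detected at Location 7: Locations 8 and 9 drop to 0
# the search would stop once likelihood.max() exceeds the requested certainty (90% in the run above)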
11,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Task 2: Classification
Step1: First we create a training set of size num_samples and num_features.
Step2: Next we run a performance test on the created data set. To do so, we train a random forest classifier multiple times and measure the training time. Each time we use a different number of jobs to train the classifier. We repeat the process on training sets of various sizes.
Step3: Finally we plot and evaluate our results.
Step4: The training time is roughly inversely proportional to the number of CPU cores used.
# imports
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import time
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Task 2: Classification
A short test to examine the performance gain when using multiple cores with sklearn's ensemble random forest classifier.
Depending on the available system, the maximum number of jobs to test and the sample size can be adjusted by changing the respective parameters.
End of explanation
num_samples = 500 * 1000
num_features = 40
X, y = make_classification(n_samples=num_samples, n_features=num_features)
Explanation: First we create a training set of size num_samples and num_features.
End of explanation
# test different number of cores: here max 8
max_cores = 8
num_cpu_list = list(range(1,max_cores + 1))
max_sample_list = [int(l * num_samples) for l in [0.1, 0.2, 1, 0.001]]
training_times_all = []
# the default setting for classifier
clf = RandomForestClassifier()
for max_sample in max_sample_list:
training_times = []
for num_cpu in num_cpu_list:
# change number of cores
clf.set_params(n_jobs=num_cpu)
# train classifier on training data
t = %timeit -o clf.fit(X[:max_sample+1], y[:max_sample+1])
# save the runtime to the list
training_times.append(t.best)
# print logging message
print("Computing for {} samples and {} cores DONE.".format(max_sample,num_cpu))
training_times_all.append(training_times)
print("All computations DONE.")
Explanation: Next we run a performance test on the created data set. To do so, we train a random forest classifier multiple times and measure the training time. Each time we use a different number of jobs to train the classifier. We repeat the process on training sets of various sizes.
End of explanation
plt.plot(num_cpu_list, training_times_all[0], 'ro', label="{}k".format(max_sample_list[0]//1000))
plt.plot(num_cpu_list, training_times_all[1], "bs" , label="{}k".format(max_sample_list[1]//1000))
plt.plot(num_cpu_list, training_times_all[2], "g^" , label="{}k".format(max_sample_list[2]//1000))
plt.axis([0, len(num_cpu_list)+1, 0, max(training_times_all[2])+1])
plt.title("Training time vs #CPU Cores")
plt.xlabel("#CPU Cores")
plt.ylabel("training time [s]")
plt.legend()
plt.show()
Explanation: Finally we plot and evaluate our results.
End of explanation
plt.plot(num_cpu_list, training_times_all[3], 'ro', label="{}k".format(max_sample_list[3]/1000))
plt.axis([0, len(num_cpu_list)+1, 0, max(training_times_all[3])+1])
plt.title("Training time vs #CPU Cores on small dataset")
plt.xlabel("#CPU Cores")
plt.ylabel("training time [s]")
plt.legend()
plt.show()
Explanation: The training time is roughly inversely proportional to the number of CPU cores used.
End of explanation |
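As a quick addition (not in the original notebook), the claim can be quantified from the lists already computed above by looking at the speed-up and parallel efficiency relative to the single-core run on the largest training set:
# speed-up and parallel efficiency relative to the 1-core run, for the largest training set
t = training_times_all[2]
speedup = [t[0] / ti for ti in t]
efficiency = [s / n for s, n in zip(speedup, num_cpu_list)]
for n, s, e in zip(num_cpu_list, speedup, efficiency):
    print("{} cores: speed-up {:.2f}x, efficiency {:.0%}".format(n, s, e))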
11,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples
reproduced from http://www.highcharts.com/demo/ and http://www.highcharts.com/stock/demo
Step1: Example 2
Step2: Example 3
Exception
The functions plot_with_table1() and plot_with_table2() are exceptions with respect to the idea of this module: they are not just transparent access to Highcharts/Highstock.
Step4: Example 4
Footer
A footer can be added to the plot. This is interesting if the plot is saved as a stand alone file.
The footer is HTML you can write from scratch but a helper function and a jinja template make it easy.
Images are embedded upon save so the saved file is standalone. Only an internet connection is required to download the js libraries.
Step5: Column, Bar
Step6: Pie
Step7: Pie, Column Drilldown
Step8: Pie Drilldown - 3 levels
Any number of levels works
Step9: Column Range
Step10: Scatter - 1
Step11: Scatter - 2
Step12: Bubble
Step13: Treemap
Building the points data structure cannot be wrapped without losing flexibility
Example (data and points data structure taken from http://jsfiddle.net/gh/get/jquery/1.9.1/highslide-software/highcharts.com/tree/master/samples/highcharts/demo/treemap-large-dataset/)
Step14: Sunburst - 2 levels
Step15: Sunburst - 3 levels
Any number of levels works
Step16: Polar Chart
Step17: Spider Web
Step18: Spider Web DrillDown
Step19: Box Plot
Step21: Heatmap
Step22: Direct access to Highcharts/Highstock documentation
Navigate the object property tree
An info() method gives the official help
WARNING: Once a property is set, the info method is not accessible any more | Python Code:
df = hc.sample.df_timeseries(N=2, Nb_bd=15+0*3700) #<=473
df.info()
display(df.head())
display(df.tail())
g = hc.Highstock()
g.chart.width = 650
g.chart.height = 550
g.legend.enabled = True
g.legend.layout = 'horizontal'
g.legend.align = 'center'
g.legend.maxHeight = 100
g.tooltip.enabled = True
g.tooltip.valueDecimals = 2
g.exporting.enabled = True
g.chart.zoomType = 'xy'
g.title.text = 'Time series plotted with HighStock'
g.subtitle.text = 'Transparent access to the underlying js lib'
g.plotOptions.series.compare = 'percent'
g.yAxis.labels.formatter = hc.scripts.FORMATTER_PERCENT
g.tooltip.pointFormat = hc.scripts.TOOLTIP_POINT_FORMAT_PERCENT
g.tooltip.positioner = hc.scripts.TOOLTIP_POSITIONER_CENTER_TOP
g.xAxis.gridLineWidth = 1.0
g.xAxis.gridLineDashStyle = 'Dot'
g.yAxis.gridLineWidth = 1.0
g.yAxis.gridLineDashStyle = 'Dot'
g.credits.enabled = True
g.credits.text = 'Source: XXX Flow Strategy & Solutions.'
g.credits.href = 'http://www.example.com'
g.series = hc.build.series(df)
g.plot(save=False, version='6.1.2', center=True)
## IF BEHIND A CORPORATE PROXY
## IF NOT PROXY IS PASSED TO .plot() THEN NO HIGHCHARTS VERSION UPDATE IS PERFORMED
## HARDODED VERSIONS ARE USED INSTEAD
# p = hc.Proxy('mylogin', 'mypwd', 'myproxyhost', 'myproxyport')
# g.plot(save=False, version='latest', proxy=p)
options_as_dict = g.options_as_dict()
options_as_dict
options_as_json = g.options_as_json()
options_as_json
Explanation: Examples
reproduced from http://www.highcharts.com/demo/ and http://www.highcharts.com/stock/demo
plot() has the following arguments:
save=True and optionally save_name and optionally save_path (default='saved') will save the graph as a stand alone HTML doc under save_path after creating it if necessary
notebook (default=True) will not inject require and jquery libs as they are already available in the classical notebook. Set to False to inject them.
version (default='latest') will specify the highcharts version to use. It is recommended to leave the default value (6.1.2 as of 4sep18).
proxy (default=None') is necessary if you want to check from highcharts release page what the latest version is, and update the list of all past versions. If no proxy is provided, the versions are hardcoded in the source code.
options_as_dict() will return highchart/highstocks options as a Python dictionary
args: chart_id to specify which div for rendering
options_as_json() will return highchart/highstocks options as json
args: Same save options as plot()
Time series
Example 1
End of explanation
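The chart_id argument mentioned above is not exercised in these cells; a minimal illustration, where the div name 'my_chart_div' is only an assumption, would be:
# render the options against a specific <div> id (the id below is illustrative only)
options_for_div = g.options_as_dict(chart_id='my_chart_div')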
df = hc.sample.df_timeseries(N=3, Nb_bd=2000)
df['Cash'] = 1.0+0.02/260
df['Cash'] = df['Cash'].cumprod()
display(df.head())
display(df.tail())
g = hc.Highstock()
g.chart.height = 550
g.legend.enabled = True
g.legend.layout = 'horizontal'
g.legend.align = 'center'
g.legend.maxHeight = 100
g.tooltip.enabled = True
g.tooltip.valueDecimals = 2
g.exporting.enabled = True
g.chart.zoomType = 'xy'
g.title.text = 'Time series plotted with HighStock'
g.subtitle.text = 'Transparent access to the underlying js lib'
g.plotOptions.series.compare = 'percent'
g.yAxis.labels.formatter = hc.scripts.FORMATTER_PERCENT
g.tooltip.pointFormat = hc.scripts.TOOLTIP_POINT_FORMAT_PERCENT
g.tooltip.positioner = hc.scripts.TOOLTIP_POSITIONER_CENTER_TOP
g.xAxis.gridLineWidth = 1.0
g.xAxis.gridLineDashStyle = 'Dot'
g.yAxis.gridLineWidth = 1.0
g.yAxis.gridLineDashStyle = 'Dot'
g.credits.enabled = True
g.credits.text = 'Source: XXX Flow Strategy & Solutions.'
g.credits.href = 'http://www.example.com'
g.series = hc.build.series(df, visible={'Track3': False})
g.plot(save=True, version='6.1.2', save_name='NoTable')
Explanation: Example 2
End of explanation
# g.plot_with_table_1(dated=False, version='6.1.2', save=True, save_name='Table1')
Explanation: Example 3
Exception
The functions plot_with_table1() and plot_with_table2() are exceptions with respect to the idea of this module: it is NOT just transparent access to Highcharts/Highstock. I added a table (based on datatables.net) to display more data about the period selected. These measurements cannot be calculated beforehand, so they have to be computed as a post-processing step.
If save=True function plot_with_table1/2() will create a standalone HTML file containing the output in subdirectory 'saved'. Optionally save_name can be set - an automatic time tag is added to keep things orderly, unless dated=False.
NOTE: Because of CSS collisions between the notebook and datatables, the table in the saved file looks better than in the notebook output area.
End of explanation
g.plotOptions.series.compare = 'value'
g.yAxis.labels.formatter = hc.scripts.FORMATTER_BASIC
g.tooltip.pointFormat = hc.scripts.TOOLTIP_POINT_FORMAT_BASIC
g.tooltip.formatter = hc.scripts.FORMATTER_QUANTILE
disclaimer = """
THE VALUE OF YOUR INVESTMENT MAY FLUCTUATE.
THE FIGURES RELATING TO SIMULATED PAST PERFORMANCES REFER TO PAST
PERIODS AND ARE NOT A RELIABLE INDICATOR OF FUTURE RESULTS.
THIS ALSO APPLIES TO HISTORICAL MARKET DATA.
"""
template_footer = hc.scripts.TEMPLATE_DISCLAIMER
create_footer = hc.scripts.from_template
logo_path = hc.scripts.PATH_TO_LOGO_SG
# logo_path = 'http://img.talkandroid.com/uploads/2015/11/Chrome-Logo.png'
# logo_path = hc.scripts.image_src('http://img.talkandroid.com/uploads/2015/11/Chrome-Logo.png')
footer = create_footer(template_footer, comment=disclaimer, img_logo=logo_path)
g.plot_with_table_2(dated=False, version='6.1.2', save=True, save_name='Table2', footer=footer)
Explanation: Example 4
Footer
A footer can be added to the plot. This is interesting if the plot is saved as a stand alone file.
The footer is HTML you can write from scratch but a helper function and a jinja template make it easy.
Images are embedded upon save so the saved file is standalone. Only an internet connection is required to download the js libraries.
End of explanation
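Since the footer is plain HTML, a hand-written snippet can be passed instead of the template helper. The example below is only a sketch: the text, styling and save_name are made up for illustration.
# a hand-written HTML footer instead of the jinja template (content and save_name are illustrative only)
footer = '<div style="font-size: 10px; color: gray;">For internal use only.</div>'
g.plot_with_table_2(dated=False, version='6.1.2', save=True, save_name='Table2Custom', footer=footer)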
df = hc.sample.df_one_idx_several_col()
df
g = hc.Highcharts()
g.chart.type = 'column'
g.chart.width = 500
g.chart.height = 300
# g.plotOptions.column.animation = False
g.title.text = 'Basic Bar Chart'
g.yAxis.title.text = 'Fruit Consumption'
g.xAxis.categories = list(df.index)
g.series = hc.build.series(df)
g.plot(center=True, save=True, version='6.1.2', save_name='test', dated=False)
g.plotOptions.column.stacking = 'normal'
g.title.text = 'Stack Bar Chart'
g.yAxis.title.text = 'Total Fruit Consumption'
g.plot(version='6.1.2')
g.plotOptions.column.stacking = 'percent'
g.yAxis.title.text = 'Fruit Consumption Distribution'
g.plot(version='6.1.2')
g = hc.Highcharts()
g.chart.type = 'bar'
g.chart.width = 500
g.chart.height = 400
g.title.text = 'Basic Bar Chart'
g.xAxis.title.text = 'Fruit Consumption'
g.xAxis.categories = list(df.index)
g.series = hc.build.series(df)
g.plot()
g.plotOptions.bar.stacking = 'normal'
g.title.text = 'Stacked Bar Chart'
g.xAxis.title.text = 'Total Fruit Consumption'
g.plot(version='6.1.2')
g.plotOptions.bar.stacking = 'percent'
g.title.text = 'Stacked Bar Chart'
g.xAxis.title.text = 'Fruit Consumption Distribution'
g.plot(version='6.1.2')
Explanation: Column, Bar
End of explanation
df = hc.sample.df_one_idx_one_col()
df
g = hc.Highcharts()
g.chart.type = 'pie'
g.chart.width = 400
g.chart.height = 400
gpo = g.plotOptions.pie
gpo.showInLegend = True
gpo.dataLabels.enabled = False
g.title.text = 'Browser Market Share'
g.series = hc.build.series(df)
g.plot(version='6.1.2')
g.chart.width = 400
g.chart.height = 300
gpo.showInLegend = False
gpo.dataLabels.enabled = True
gpo.startAngle = -90
gpo.endAngle = 90
gpo.innerSize = '40%'
gpo.center = ['50%', '95%']
g.plot(version='6.1.2')
Explanation: Pie
End of explanation
df = hc.sample.df_two_idx_one_col()
df.head()
g = hc.Highcharts()
g.chart.type = 'pie'
g.chart.width = 500
g.chart.height = 500
g.exporting = False
gpo = g.plotOptions.pie
gpo.showInLegend = False
gpo.dataLabels.enabled = True
gpo.center = ['50%', '50%']
gpo.size = '65%'
g.drilldown.drillUpButton.position = {'x': 0, 'y': 0}
g.title.text = 'Browser Market Share'
g.series, g.drilldown.series = hc.build.series_drilldown(df)
g.plot(version='6.1.2')
g = hc.Highcharts()
g.chart.type = 'bar'
g.chart.width = 500
g.chart.height = 500
g.exporting = False
gpo = g.plotOptions.pie
gpo.showInLegend = False
gpo.dataLabels.enabled = True
gpo.center = ['50%', '50%']
gpo.size = '65%'
g.drilldown.drillUpButton.position = {'x': 0, 'y': 0}
g.title.text = 'Browser Market Share'
g.series, g.drilldown.series = hc.build.series_drilldown(df)
g.plot()
Explanation: Pie, Column Drilldown
End of explanation
df = hc.sample.df_several_idx_one_col_2()
df.head()
df
# g = hc.Highcharts()
# g.chart.type = 'pie'
# g.chart.width = 500
# g.chart.height = 500
# g.exporting = False
# gpo = g.plotOptions.pie
# gpo.showInLegend = False
# gpo.dataLabels.enabled = True
# gpo.center = ['50%', '50%']
# gpo.size = '65%'
# g.drilldown.drillUpButton.position = {'x': 0, 'y': 0}
# g.title.text = 'World Population'
# g.series, g.drilldown.series = hc.build.series_drilldown(df, top_name='World')
# # g.plot(version='6.1.2')
Explanation: Pie Drilldown - 3 levels
Any number of levels works
End of explanation
df = hc.sample.df_one_idx_two_col()
df.head()
g = hc.Highcharts()
g.chart.type = 'columnrange'
g.chart.inverted = True
g.chart.width = 700
g.chart.height = 400
gpo = g.plotOptions.columnrange
gpo.dataLabels.enabled = True
gpo.dataLabels.formatter = 'function() { return this.y + "°C"; }'
g.tooltip.valueSuffix = '°C'
g.xAxis.categories, g.series = hc.build.series_range(df)
g.series[0]['name'] = 'Temperature'
g.yAxis.title.text = 'Temperature (°C)'
g.xAxis.title.text = 'Month'
g.title.text = 'Temperature Variations by Month'
g.subtitle.text = 'Vik, Norway'
g.legend.enabled = False
g.plot(save=True, save_name='index', version='6.1.2', dated=False, notebook=False)
Explanation: Column Range
End of explanation
df = hc.sample.df_scatter()
df.head()
g = hc.Highcharts()
g.chart.type = 'scatter'
g.chart.width = 700
g.chart.height = 500
g.chart.zoomType = 'xy'
g.exporting = False
g.plotOptions.scatter.marker.radius = 5
g.tooltip.headerFormat = '<b>Sex: {series.name}</b><br>'
g.tooltip.pointFormat = '{point.x} cm, {point.y} kg'
g.legend.layout = 'vertical'
g.legend.align = 'left'
g.legend.verticalAlign = 'top'
g.legend.x = 100
g.legend.y = 70
g.legend.floating = True
g.legend.borderWidth = 1
g.xAxis.title.text = 'Height (cm)'
g.yAxis.title.text = 'Weight (kg)'
g.title.text = 'Height Versus Weight of 507 Individuals by Gender'
g.subtitle.text = 'Source: Heinz 2003'
g.series = hc.build.series_scatter(df, color_column='Sex',
color={'Female': 'rgba(223, 83, 83, .5)',
'Male': 'rgba(119, 152, 191, .5)'})
g.plot(version='6.1.2')
Explanation: Scatter - 1
End of explanation
df = hc.sample.df_scatter()
df['Tag'] = np.random.choice(range(int(1e5)), size=len(df), replace=False)
df.head()
g = hc.Highcharts()
g.chart.type = 'scatter'
g.chart.width = 700
g.chart.height = 500
g.chart.zoomType = 'xy'
g.exporting = False
g.plotOptions.scatter.marker.radius = 5
g.tooltip.headerFormat = '<b>Sex: {series.name}</b><br><b>Tag: {point.key}</b><br>'
g.tooltip.pointFormat = '{point.x} cm, {point.y} kg'
g.legend.layout = 'vertical'
g.legend.align = 'left'
g.legend.verticalAlign = 'top'
g.legend.x = 100
g.legend.y = 70
g.legend.floating = True
g.legend.borderWidth = 1
g.xAxis.title.text = 'Height (cm)'
g.yAxis.title.text = 'Weight (kg)'
g.title.text = 'Height Versus Weight of 507 Individuals by Gender'
g.subtitle.text = 'Source: Heinz 2003'
g.series = hc.build.series_scatter(df, color_column='Sex', title_column='Tag',
color={'Female': 'rgba(223, 83, 83, .5)',
'Male': 'rgba(119, 152, 191, .5)'})
g.plot(version='6.1.2')
Explanation: Scatter - 2
End of explanation
df = hc.sample.df_bubble()
df.head()
g = hc.Highcharts()
g.chart.type = 'bubble'
g.chart.width = 700
g.chart.height = 500
g.chart.zoomType = 'xy'
g.plotOptions.bubble.minSize = 20
g.plotOptions.bubble.maxSize = 60
g.legend.enabled = True
g.title.text = 'Bubbles'
g.series = hc.build.series_bubble(df, color={'A': 'rgba(223, 83, 83, .5)', 'B': 'rgba(119, 152, 191, .5)'})
g.plot(version='6.1.2')
Explanation: Bubble
End of explanation
df = hc.sample.df_several_idx_one_col()
df.head()
colors = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#2b908f', '#f45b5b', '#91e8e1']
points = hc.build.series_tree(df, set_color=True, colors=colors, set_value=True, precision=2)
points[:5]
g = hc.Highcharts()
g.chart.type = 'treemap'
g.chart.width = 900
g.chart.height = 600
g.title.text = 'Global Mortality Rate 2012, per 100 000 population'
g.subtitle.text = 'Click points to drill down.\nSource: \
<a href="http://apps.who.int/gho/data/node.main.12?lang=en">WHO</a>.'
g.exporting = False
g.series = [{
'type': "treemap",
'layoutAlgorithm': 'squarified',
'allowDrillToNode': True,
'dataLabels': {
'enabled': False
},
'levelIsConstant': False,
'levels': [{
'level': 1,
'dataLabels': {
'enabled': True
},
'borderWidth': 3
}],
'data': points,
}]
g.plot(version='6.1.2')
Explanation: Treemap
Building the points data structure cannot be wrapped without losing flexibility
Example (data and points datastructure taken from http://jsfiddle.net/gh/get/jquery/1.9.1/highslide-software/highcharts.com/tree/master/samples/highcharts/demo/treemap-large-dataset/
End of explanation
df = hc.sample.df_two_idx_one_col()
df.head()
points = hc.build.series_tree(df, set_total=True, name_total='Total',
set_color=False,
set_value=False, precision=2)
points[:5]
g = hc.Highcharts()
g.chart.type = 'sunburst'
g.title.text = 'Browser Market Share'
g.plotOptions.series.animation = True
g.chart.height = '80%'
g.chart.animation = True
g.exporting = False
g.tooltip = {
'headerFormat': "",
'pointFormat': '<b>{point.name}</b> Market Share is <b>{point.value:,.3f}</b>'
}
g.series = [{
'type': 'sunburst',
'data': points,
'allowDrillToNode': True,
'cursor': 'pointer',
'dataLabels': {
'format': '{point.name}',
'filter': {
'property': 'innerArcLength',
'operator': '>',
'value': 16
}
},
'levels': [{
'level': 2,
'colorByPoint': True,
'dataLabels': {
'rotationMode': 'parallel'
}
},
{
'level': 3,
'colorVariation': {
'key': 'brightness',
'to': -0.5
}
}, {
'level': 4,
'colorVariation': {
'key': 'brightness',
'to': 0.5
}
}]
}]
g.plot(version='6.1.2')
Explanation: Sunburst - 2 levels
End of explanation
df = hc.sample.df_several_idx_one_col_2()
df.head()
points = hc.build.series_tree(df, set_total=True, name_total='World',
set_value=False, set_color=False, precision=0)
points[:5]
g = hc.Highcharts()
g.chart.type = 'sunburst'
g.chart.height = '90%'
g.chart.animation = True
g.title.text = 'World population 2017'
g.subtitle.text = 'Source <href="https://en.wikipedia.org/wiki/List_of_countries_by_population_(United_Nations)">Wikipedia</a>'
g.exporting = False
g.series = [{
'type': "sunburst",
'data': points,
'allowDrillToNode': True,
'cursor': 'pointer',
'dataLabels': {
'format': '{point.name}',
'filter': {
'property': 'innerArcLength',
'operator': '>',
'value': 16
}
},
'levels': [{
'level': 2,
'colorByPoint': True,
'dataLabels': {
'rotationMode': 'parallel'
}
},
{
'level': 3,
'colorVariation': {
'key': 'brightness',
'to': -0.5
}
}, {
'level': 4,
'colorVariation': {
'key': 'brightness',
'to': 0.5
}
}]
}]
g.plot(version='6.1.2')
Explanation: Sunburst - 3 levels
Any number of levels works
End of explanation
df = pd.DataFrame(data=np.array([[8, 7, 6, 5, 4, 3, 2, 1],
[1, 2, 3, 4, 5, 6, 7, 8],
[1, 8, 2, 7, 3, 6, 4, 5]]).T,
columns=['column', 'line', 'area'])
df
g = hc.Highcharts()
g.chart.polar = True
g.chart.width = 500
g.chart.height = 500
g.title.text = 'Polar Chart'
g.pane.startAngle = 0
g.pane.endAngle = 360
g.pane.background = [{'backgroundColor': '#FFF',
'borderWidth': 0
}]
g.xAxis.tickInterval = 45
g.xAxis.min = 0
g.xAxis.max = 360
g.xAxis.labels.formatter = 'function() { return this.value + "°"; }'
g.yAxis.min = 0
g.plotOptions.series.pointStart = 0
g.plotOptions.series.pointInterval = 45
g.plotOptions.column.pointPadding = 0
g.plotOptions.column.groupPadding = 0
g.series = [{
'type': 'column',
'name': 'Column',
'data': list(df['column']),
'pointPlacement': 'between',
}, {
'type': 'line',
'name': 'Line',
'data': list(df['line']),
}, {
'type': 'area',
'name': 'Area',
'data': list(df['area']),
}
]
g.plot(version='6.1.2')
Explanation: Polar Chart
End of explanation
df = pd.DataFrame(data=np.array([[43000, 19000, 60000, 35000, 17000, 10000],
[50000, 39000, 42000, 31000, 26000, 14000]]).T,
columns=['Allocated Budget', 'Actual Spending'],
index = ['Sales', 'Marketing', 'Development', 'Customer Support',
'Information Technology', 'Administration'])
df
g = hc.Highcharts()
g.chart.polar = True
g.chart.width = 650
g.chart.height = 500
g.title.text = 'Budget vs. Spending'
g.title.x = -80
g.pane.size = '80%'
g.pane.background = [{'backgroundColor': '#FFF',
'borderWidth': 0
}]
g.xAxis.tickmarkPlacement = 'on'
g.xAxis.lineWidth = 0
g.xAxis.categories = list(df.index)
g.yAxis.min = 0
g.yAxis.lineWidth = 0
g.yAxis.gridLineInterpolation = 'polygon'
g.tooltip.pointFormat = '<span style="color:{series.color}">{series.name}: <b>${point.y:,.0f}</b><br/>'
g.tooltip.shared = True
g.legend.align = 'right'
g.legend.verticalAlign = 'top'
g.legend.y = 70
g.legend.layout = 'vertical'
g.series = [{
'name': 'Allocated Budget',
'data': list(df['Allocated Budget']),
'pointPlacement': 'on'
}, {
'name': 'Actual Spending',
'data': list(df['Actual Spending']),
'pointPlacement': 'on'
},
]
g.plot(version='6.1.2')
Explanation: Spider Web
End of explanation
df = hc.sample.df_two_idx_several_col()
df.info()
display(df.head(10))
display(df.tail(10))
g = hc.Highcharts()
# g.chart.type = 'column'
g.chart.polar = True
g.plotOptions.series.animation = True
g.chart.width = 950
g.chart.height = 700
g.pane.size = '90%'
g.title.text = 'Perf (%) Contrib by Strategy & Period'
g.xAxis.type = 'category'
g.xAxis.tickmarkPlacement = 'on'
g.xAxis.lineWidth = 0
g.yAxis.gridLineInterpolation = 'polygon'
g.yAxis.lineWidth = 0
g.yAxis.plotLines = [{'color': 'gray', 'value': 0, 'width': 1.5}]
g.tooltip.pointFormat = '<span style="color:{series.color}">{series.name}: <b>{point.y:,.3f}%</b><br/>'
g.tooltip.shared = False
g.legend.enabled = True
g.legend.align = 'right'
g.legend.verticalAlign = 'top'
g.legend.y = 70
g.legend.layout = 'vertical'
# color names from http://www.w3schools.com/colors/colors_names.asp
# color rgba() codes from http://www.hexcolortool.com/
g.series, g.drilldown.series = hc.build.series_drilldown(df, colorByPoint=False,
color={'5Y': 'indigo'},
# color={'5Y': 'rgba(136, 110, 166, 1)'}
)
g.plot(save=True, save_name='ContribTable', version='6.1.2')
Explanation: Spider Web DrillDown
End of explanation
df_obs = pd.DataFrame(data=np.array([[760, 801, 848, 895, 965],
[733, 853, 939, 980, 1080],
[714, 762, 817, 870, 918],
[724, 802, 806, 871, 950],
[834, 836, 864, 882, 910]]),
index=list('ABCDE'))
display(df_obs)
# x, y positions where 0 is the first category
df_outlier = pd.DataFrame(data=np.array([[0, 644],
[4, 718],
[4, 951],
[4, 969]]))
display(df_outlier)
colors = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#2b908f', '#f45b5b', '#91e8e1']
g = hc.Highcharts()
g.chart.type = 'boxplot'
g.chart.width = 850
g.chart.height = 500
g.title.text = 'Box Plot Example'
g.legend.enabled = False
g.xAxis.categories = list(df_obs.index)
g.xAxis.title.text = 'Experiment'
g.yAxis.title.text = 'Observations'
g.yAxis.plotLines= [{
'value': 932,
'color': 'red',
'width': 1,
'label': {
'text': 'Theoretical mean: 932',
'align': 'center',
'style': { 'color': 'gray' }
}
}]
g.series = []
g.series.append({
'name': 'Observations',
'data': list(df_obs.values),
'tooltip': { 'headerFormat': '<em>Experiment No {point.key}</em><br/>' },
})
g.series.append({
'name': 'Outlier',
'color': colors[0],
'type': 'scatter',
'data': list(df_outlier.values),
'marker': {
'fillColor': 'white',
'lineWidth': 1,
'lineColor': colors[0],
},
'tooltip': { 'pointFormat': 'Observation: {point.y}' }
})
g.plot(version='6.1.2')
Explanation: Box Plot
End of explanation
df = hc.sample.df_one_idx_several_col_2()
df
colors = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#2b908f', '#f45b5b', '#91e8e1']
idx, col, data = hc.build.series_heatmap(df)
g = hc.Highcharts()
g.chart.type = 'heatmap'
g.chart.width = 650
g.chart.height = 450
g.title.text = 'Sales per employee per weekday'
g.xAxis.categories = idx
g.yAxis.categories = col
g.yAxis.title = ''
g.colorAxis = {
'min': 0,
'minColor': '#FFFFFF',
'maxColor': colors[0],
}
g.legend = {
'align': 'right',
'layout': 'vertical',
'margin': 0,
'verticalAlign': 'top',
'y': 25,
'symbolHeight': 280
}
g.tooltip = {
'formatter': function () {
return '<b>' + this.series.xAxis.categories[this.point.x] + '</b> sold <br><b>' +
this.point.value + '</b> items on <br><b>' + this.series.yAxis.categories[this.point.y] + '</b>';
}
}
g.series = []
g.series.append({
'name': 'Sales per Employee',
'borderWidth': 1,
'data': data,
'dataLabels': {
'enabled': True,
'color': '#000000',
}
})
g.plot(version='6.1.2')
Explanation: Heatmap
End of explanation
g = hc.Highcharts()
g.yAxis.info()
g.yAxis.labels.format.info()
g = hc.Highstock()
g.plotOptions.info()
g = hc.Highcharts()
g.legend.align.info()
Explanation: Direct access to Highcharts/Highstock documentation
Navigate the object property tree
An info() method gives the official help
WARNING: Once a property is set, the info method is not accessible any more
End of explanation |
11,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p>
<img src="http
Step1:
Step2:
Step3: | Python Code:
from itertools import repeat
from sympy import *
#from type_system import *
%run ../../src/commons.py
%run ./type-system.py
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
Massimo Nocentini<br>
</div>
</p>
<br>
<div align="center">
<b>Abstract</b><br>
In this document we collect a naive <i>type system</i> based on sets.
</div>
End of explanation
init_printing()
x,y,m,n,t,z = symbols('x y m n t z', commutative=True)
alpha, beta, gamma, eta = symbols(r'\alpha \beta \gamma \eta', commutative=True)
f,g = Function('f'), Function('g')
Explanation:
End of explanation
bin_tree_gfs = bin_tree(tyvar(alpha)[z]).gf()
bin_tree_gfs
bin_tree_gf = bin_tree_gfs[0]
bin_tree_gf.series(z, n=10, kernel='ordinary')
bin_tree_gf.series(z, n=10, kernel='catalan')
occupancy(bin_tree_gf, syms=[alpha], objects='unlike', boxes='unlike').series(z)
Explanation:
End of explanation
bin_tree_of_boolean_gfs = bin_tree(maybe(tyvar(alpha))[z]).gf()
bin_tree_of_boolean_gfs
bin_tree_of_boolean_gf = bin_tree_of_boolean_gfs[0]
occupancy(bin_tree_of_boolean_gf, syms=[alpha], objects='unlike', boxes='unlike').series(z,n=6, kernel='ordinary')
Explanation:
End of explanation |
11,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: basics
Step2: A print and a plot function are implemented to represent kernel objects.
Step3: Implemented kernels
Many kernels are already implemented in GPy. The following figure gives a summary of some of them (a comprehensive list can be found by typing GPy.kern.<tab>)
Step4: Operations to combine kernels
In GPy, kernel objects can be added or multiplied to create a multitude of kernel objects. Parameters and their gradients are handled automatically, and so appear in the combined objects. When kernels are used inside GP objects all the necessary gradients are automagically computed using the chain-rule.
Step5: Note that the kernels that have been added are pythonic in that the objects remain linked
Step6: Operating on different domains
When multiplying and adding kernels, there are two general possibilities: one can assume that the kernels to add/multiply are defined on the same space or on different spaces | Python Code:
import GPy
import numpy as np
Explanation: Tutorial : A kernel overview
Nicolas Durrande and James Hensman, 2013, 2014
The aim of this tutorial is to give a better understanding of the kernel objects in GPy and to list the ones that are already implemented.
First we import the libraries we will need
End of explanation
ker1 = GPy.kern.RBF(1) # Equivalent to ker1 = GPy.kern.rbf(input_dim=1, variance=1., lengthscale=1.)
ker2 = GPy.kern.RBF(input_dim=1, variance = .75, lengthscale=2.)
ker3 = GPy.kern.RBF(1, .5, .5)
Explanation: basics: construction, printing and plotting
For most kernels, the input dimension (domain) is the only mandatory parameter to define a kernel object. However, it is also possible to specify the values of the parameters. For example, the three following commands are valid for defining a squared exponential kernel (ie rbf or Gaussian)
End of explanation
print ker2
_ = ker1.plot(ax=plt.gca())
_ = ker2.plot(ax=plt.gca())
_ = ker3.plot(ax=plt.gca())
Explanation: A print and a plot function are implemented to represent kernel objects.
End of explanation
figure, axes = plt.subplots(3,3, figsize=(10,10), tight_layout=True)
kerns = [GPy.kern.RBF(1), GPy.kern.Exponential(1), GPy.kern.Matern32(1), GPy.kern.Matern52(1), GPy.kern.Brownian(1), GPy.kern.Bias(1), GPy.kern.Linear(1), GPy.kern.PeriodicExponential(1), GPy.kern.White(1)]
for k,a in zip(kerns, axes.flatten()):
k.plot(ax=a, x=1)
a.set_title(k.name.replace('_', ' '))
Explanation: Implemented kernels
Many kernels are already implemented in GPy. The following figure gives a summary of some of them (a comprehensive list can be found by typing GPy.kern.<tab>):
End of explanation
# Product of kernels
k1 = GPy.kern.RBF(1,1.,2.)
k2 = GPy.kern.Matern32(1, 0.5, 0.2)
k_prod = k1 *k2
print k_prod
k_prod.plot()
# Sum of kernels
k1 = GPy.kern.RBF(1,1.,2.)
k2 = GPy.kern.Matern32(1, 0.5, 0.2)
k_add = k1 + k2
print k_add
k_add.plot()
Explanation: Operations to combine kernels
In GPy, kernel objects can be added or multiplied to create a multitude of kernel objects. Parameters and their gradients are handled automatically, and so appear in the combined objects. When kernels are used inside GP objects all the necessary gradients are automagically computed using the chain-rule.
End of explanation
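Those gradients come into play as soon as the combined kernel is dropped into a GP model. The short sketch below uses made-up toy data and is only an illustration of that point, not part of the original tutorial:
# toy data made up for this sketch: the combined kernel is used inside a GP regression model
# and its parameters are optimised thanks to the automatically computed gradients
X = np.random.uniform(-3., 3., (20, 1))
Y = np.sin(X) + np.random.randn(20, 1) * 0.05
m = GPy.models.GPRegression(X, Y, kernel=k_add.copy())   # .copy() keeps k_add itself untouched
m.optimize()
print m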
print k1, '\n'
k_add.rbf.variance = 12.
print k1
Explanation: Note that the kernels that have been added are pythonic in that the objects remain linked: changing parameters of an add kernel changes those of the constituent parts, and vice versa
End of explanation
k1 = GPy.kern.Linear(input_dim=1, active_dims=[0]) # works on the first column of X, index=0
k2 = GPy.kern.ExpQuad(input_dim=1, lengthscale=3, active_dims=[1]) # works on the second column of X, index=1
k = k1 * k2
k.plot(x=np.ones((1,2)))
def plot_sample(k):
xx, yy = np.mgrid[-3:3:30j, -3:3:30j]
X = np.vstack((xx.flatten(), yy.flatten())).T
K = k.K(X)
s = np.random.multivariate_normal(np.zeros(X.shape[0]), K)
#plt.contourf(xx, yy, s.reshape(*xx.shape), cmap=plt.cm.hot)
plt.imshow(s.reshape(*xx.shape), interpolation='nearest')
plt.colorbar()
plot_sample(k)
k1 = GPy.kern.PeriodicExponential(input_dim=1, active_dims=[0], period=6, lower=-10, upper=10)# + GPy.kern.Bias(1, variance=0, active_dims=[0])
k2 = GPy.kern.PeriodicExponential(input_dim=1, active_dims=[1], period=8, lower=-10, upper=10)# + GPy.kern.Bias(1, variance=0, active_dims=[0])
#k2 = GPy.kern.ExpQuad(1, active_dims=[1])
k = k1 * k2
plot_sample(k)
Explanation: Operating on different domains
When multiplying and adding kernels there are two general possibilities, since one can assume that the kernels to add/multiply are defined either on the same space or on different spaces:
a kernel over $\mathbb{R} \times \mathbb{R}: k(x,x') = k_1(x,x') \times k_2(x,x')$
a kernel over $\mathbb{R}^2 \times \mathbb{R}^2: k(\mathbf{x},\mathbf{x}') = k_1(x_1,x'_1) \times k_2(x_2,x'_2)$
To keep things as general as possible, in GPy kernels are assigned active_dims which tell the kernel what to work on. To create a kernel which is a product of kernels on different spaces, we can do
End of explanation |
11,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
It Starts with a Dataset
Step1: Transforming Text to Numbers
Example Predictions
Step2: Creating the Input Data
Step3: And now we can initialize our (empty) input layer as a vector of 0s. We'll modify it later by putting "1"s in various positions.
Step4: And now we want to create a function that will set our layer_0 list to the correct sequence of 1s and 0s based on a single review. Now if you remember our picture before, you might have noticed something. Each word had a specific place in the input of our network.
Step5: In order to create a function that can update our layer_0 variable based on a review, we have to decide which spots in our layer_0 vector (list of numbers) correlate with each word. Truth be told, it doesn't matter which ones we choose, only that we pick spots for each word and stick with them. Let's decide those positions now and store them in a python dictionary called "word2index".
Step6: ...and now we can use this new "word2index" dictionary to populate our input layer with the right 1s in the right places.
Step7: Creating the Target Data
Step8: Putting it all together in a Neural Network
Step9: Making our Network Train and Run Faster
Step10: First Inefficiency
Step11: Second Inefficiency
Step12: See how they generate exactly the same value? Let's update our new neural network to do this.
Step13: Making Learning Faster & Easier by Reducing Noise
Step14: What's Going On in the Weights? | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r')
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r')
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
import numpy as np
from collections import Counter
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 10):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
Explanation: It Starts with a Dataset
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: Transforming Text to Numbers
Example Predictions
End of explanation
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
Explanation: Creating the Input Data
End of explanation
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
Explanation: And now we can initialize our (empty) input layer as a vector of 0s. We'll modify it later by putting "1"s in various positions.
End of explanation
from IPython.display import Image
Image(filename='sentiment_network.png')
Explanation: And now we want to create a function that will set our layer_0 list to the correct sequence of 1s and 0s based on a single review. Now if you remember our picture before, you might have noticed something. Each word had a specific place in the input of our network.
End of explanation
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
Explanation: In order to create a function that can update our layer_0 variable based on a review, we have to decide which spots in our layer_0 vector (list of numbers) correlate with each word. Truth be told, it doesn't matter which ones we choose, only that we pick spots for each word and stick with them. Let's decide those positions now and store them in a python dictionary called "word2index".
End of explanation
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] = 1
update_input_layer(reviews[0])
layer_0
Explanation: ...and now we can use this new "word2index" dictionary to populate our input layer with the right 1s in the right places.
End of explanation
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
get_target_for_label(labels[0])
get_target_for_label(labels[1])
Explanation: Creating the Target Data
End of explanation
from IPython.display import Image
Image(filename='sentiment_network_2.png')
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
np.random.seed(1)
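        # note: in this first version pre_process_data() takes no arguments and reads the
        # notebook-level `reviews` and `labels` variables defined earlier in the notebook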
self.pre_process_data()
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000])
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
# evaluate the model after training
mlp.test(reviews[-1000:],labels[-1000:])
mlp.run("That movie was great")
Explanation: Putting it all together in a Neural Network
End of explanation
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_1 = layer_0.dot(weights_0_1)
layer_1
Image(filename='sentiment_network_sparse.png')
Explanation: Making our Network Train and Run Faster
End of explanation
Image(filename='sentiment_network_sparse_2.png')
Explanation: First Inefficiency: "0" neurons waste computation
End of explanation
#inefficient thing we did before
layer_1 = layer_0.dot(weights_0_1)
layer_1
# new, less expensive lookup table version
layer_1 = weights_0_1[4] + weights_0_1[9]
layer_1
Explanation: Second Inefficiency: "1" neurons don't need to multiply!
The Solution: Create layer_1 by adding the vectors for each word.
End of explanation
import time
import sys
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
np.random.seed(1)
self.pre_process_data(reviews)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self,reviews):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
# Hidden layer
# layer_1 = self.layer_0.dot(self.weights_0_1)
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
# Hidden layer
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],learning_rate=0.01)
# train the network
mlp_full.train(reviews[:-1000],labels[:-1000])
# evaluate our model before training (just to show how horrible it is)
mlp_full.test(reviews[-1000:],labels[-1000:])
Explanation: See how they generate exactly the same value? Let's update our new neural network to do this.
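To get a rough feel for why this matters, here is a small illustrative timing sketch; the array sizes and indices are assumptions for the illustration, not measurements from this project:
import timeit
big_weights = np.random.randn(70000, 10)      # roughly vocabulary-sized
one_hot = np.zeros(70000)
active = [12, 345, 6789, 40000]               # pretend these indices are the words in one review
one_hot[active] = 1
print(timeit.timeit(lambda: one_hot.dot(big_weights), number=100))
print(timeit.timeit(lambda: big_weights[active].sum(axis=0), number=100))  # should be dramatically smaller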
End of explanation
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
np.random.seed(1)
self.pre_process_data(reviews, polarity_cutoff, min_count)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self,reviews, polarity_cutoff,min_count):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
review_vocab = set()
for review in reviews:
for word in review.split(" "):
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
# Hidden layer
# layer_1 = self.layer_0.dot(self.weights_0_1)
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
if(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
# Hidden layer
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: Making Learning Faster & Easier by Reducing Noise
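A toy, self-contained illustration of the two noise filters introduced above (the counts and ratios are made up; the real cutoffs are applied inside pre_process_data):
toy_counts = {'terrible': 500, 'the': 9000, 'superb': 300, 'edge': 60}
toy_ratios = {'terrible': -2.1, 'the': 0.02, 'superb': 1.9, 'edge': 0.04}
min_count, polarity_cutoff = 100, 0.1
kept = [w for w in toy_counts
        if toy_counts[w] > min_count and abs(toy_ratios[w]) >= polarity_cutoff]
print(kept)  # ['terrible', 'superb'] -- only frequent and polarized words survive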
End of explanation
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#"+colors.rgb2hex([0,min(255,pos_neg_ratios[word] * 1),0])[3:])
else:
neg+=1
colors_list.append("#000000")
# colors_list.append("#"+colors.rgb2hex([0,0,min(255,pos_neg_ratios[word] * 1)])[3:])
len(vectors_list)
len(colors_list)
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)  # needed by the plot below; this can take a minute to run
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize))
p.scatter(x="x1", y="x2", size=8, source=source,color=colors_list)
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
# p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
Explanation: What's Going On in the Weights?
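One hedged way to probe the weights directly (this helper is not part of the original notebook): treat each row of weights_0_1 as a word vector and rank words by their dot product with a target word.
def similar_words(word, n=5):
    target = mlp_full.weights_0_1[mlp_full.word2index[word]]
    scores = {w: np.dot(mlp_full.weights_0_1[i], target) for w, i in mlp_full.word2index.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:n]
print(similar_words('excellent'))  # expect other strongly positive words near the top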
End of explanation |
11,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create a training set, a test set and a set to predict for
Step1: Inspect the features, I know these features (at leasr spectral indices) are correlated but also have high variance, I could pick my favorite features and use those instead | Python Code:
features=wisps.INDEX_NAMES
Explanation: Create a training set, a test set and a set to predict for
End of explanation
#remove infinities and nans
def remove_infinities_and_nans(array):
array=np.log10(array)
infinbools=np.isinf(array)
nanbools=np.isnan(array)
mask=np.logical_or(infinbools, nanbools)
array[mask]=-99
return array
spex[features]=spex[features].apply(remove_infinities_and_nans, axis=0)
data[features]=data[features].apply(remove_infinities_and_nans, axis=0)
from sklearn.decomposition import PCA
pca = PCA(n_components=3, svd_solver='full')
pca.fit(spex[features].values)
spex_pcaed=pca.transform(spex[features].values)
proj_sample=pca.transform(data[features].values)
colors=an.color_from_spts(spex.spt.values, cmap='viridis')
fig, ax=plt.subplots()
plt.scatter(proj_sample[:,0], proj_sample[:,1], alpha=0.006, color='k')
plt.scatter(spex_pcaed[:,0],spex_pcaed[:, 1], color=colors, s=5.)
plt.xlabel('axis-1', fontsize=18)
plt.ylabel('axis-2', fontsize=18)
#ax.set_yscale('log')
#ax.set_xscale('log')
plt.xlim([-1., .4])
plt.ylim([-1., 1.])
import pandas as pd
train_df=spex
train_df['axis1']=spex_pcaed[:,0]
train_df['axis2']=spex_pcaed[:,1]
train_df['spt']=spex.spt
pred_df=data
pred_df['axis1']=proj_sample[:,0]
pred_df['axis2']=proj_sample[:,1]
pred_df['spt']=data.spt
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix,accuracy_score
def add_labels(spt):
label=0.0
if 17<=spt<20:
label=1.0
if 20<=spt<30:
label=2.0
if 30<=spt<39.:
label=3.0
return label
def compute_accuracy_score(features=features, split_size=0.5):
scaler = MinMaxScaler(feature_range=(0, 1))
#train_set=train_df[features]
X_train, X_test, y_train, y_test = train_test_split(train_df[features].values, train_df['label'].values, test_size=split_size,
random_state=123456) ###grammar
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
rf = RandomForestClassifier( oob_score=True, verbose=0.)
rf.fit(X_train, y_train)
pred_labels = rf.predict(X_test)
model_accuracy = accuracy_score(y_test, pred_labels)
#pred_probs=rf.predict_proba(X_test)
#decispth=rf.decision_path(X_test)
return model_accuracy, rf, scaler
rf_features=['axis1', 'axis2', 'snr1', 'snr2', 'f_test','line_chi', 'spex_chi']
train_df['label']=train_df.spt.apply(add_labels)
pred_df['label']=pred_df.spt.apply(add_labels)
train_df[rf_features]=train_df[rf_features].apply(remove_infinities_and_nans, axis=0)
pred_df[rf_features]=pred_df[rf_features].apply(remove_infinities_and_nans, axis=0)
acc, model, scaler=compute_accuracy_score(features=rf_features, split_size=0.5)
#apply model
pred_set=scaler.transform(pred_df[rf_features].values)
pred_labels=model.predict(pred_set)
plt.hist(pred_labels)
len(pred_labels>0)
plt.plot(pca.explained_variance_ratio_)
Explanation: Inspect the features, I know these features (at leasr spectral indices) are correlated but also have high variance, I could pick my favorite features and use those instead
End of explanation |
11,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clipper Tutorial
Step1: Extract the images
Now, we must extract the data into a format we can load. This will make use of the provided extract_cifar.py
This dataset has 50,000 training datapoints and 10,000 test datapoints. We don't need to use all of them to demonstrate how Clipper works. Feel free to adjust max_train_datapoints and max_test_datapoints or set them to None if you wish to use all the data available for training and testing. You can change these vaues and re-run this command in the future if you wish.
Using 10,000 training images (as opposed to the full 50,000 in the dataset) yields similar prediction accuracies and takes less time to extract into a readable format.
Step2: Load Cifar
The first step in building any application, using machine-learning or otherwise, is to understand the application requirements. Load the dataset into the notebook so you can examine it and better understand the dataset you will be working with. The cifar_utils library provides several utilities for working with CIFAR data – we will make use of one of them here.
Step3: Take a look at the data you've loaded. The size and blurriness of these photos should give you a better understanding of the difficulty of the task you will ask of your machine learning models! If you'd like to see more images, increase the number of rows of images displayed -- the last argument to the function -- to a number greater than 2.
Step4: Start Clipper
Now you're ready to start Clipper! You will be using the clipper_admin client library to perform administrative commands.
Remember, Docker and Docker-Compose must be installed before deploying Clipper. Visit https
Step5: Congratulations! You now have a running Clipper instance that you can start to interact with. Think of your clipper Python object as a vehicle for that interaction. Try using it to see the applications deployed to this Clipper instance
Step6: Create an application
In order to query Clipper for predictions, you need to create an application. Each application specifies a name, a set of models it can query, the query input datatype, the selection policy, and a latency service level objective. Once you register an application with Clipper, the system will create two REST endpoints
Step7: Now when you list the applications registered with Clipper, you should see the newly registered "cifar_demo" application show up!
Step8: Start serving
Now that you have registered an application, you can start querying the application for predictions. In this case,
Clipper has created two endpoints | Python Code:
cifar_loc = ""
%run ./download_cifar.py $cifar_loc
Explanation: Clipper Tutorial: Part 1
This tutorial will walk you through the process of starting Clipper, creating and querying a Clipper application, and deploying models to Clipper. In the first part of the demo, you will set up Clipper and create an application without involving any machine learning, demonstrating how a frontend developer or dev-ops engineer can set up and query Clipper without having to know anything about the machine-learning models involved.
As an example, this tutorial will walk you through creating an application that labels images as either pictures of birds or planes. You will use the CIFAR-10 dataset as the source of these images.
Download the images
As the first step in the tutorial, download the CIFAR dataset that your Clipper application will work with. You can do this by specifying a download location, cifar_loc, and running the below code. This will make use of the provided download_cifar.py.
This download can take some time. If it fails before you see the output "Finished downloading", go to the download location you specified, delete cifar-10-python.tar.gz, and attempt the download again.
End of explanation
max_train_datapoints = 10000
max_test_datapoints = 10000
%run ./extract_cifar.py $cifar_loc $max_train_datapoints $max_test_datapoints
Explanation: Extract the images
Now, we must extract the data into a format we can load. This will make use of the provided extract_cifar.py
This dataset has 50,000 training datapoints and 10,000 test datapoints. We don't need to use all of them to demonstrate how Clipper works. Feel free to adjust max_train_datapoints and max_test_datapoints or set them to None if you wish to use all the data available for training and testing. You can change these vaues and re-run this command in the future if you wish.
Using 10,000 training images (as opposed to the full 50,000 in the dataset) yields similar prediction accuracies and takes less time to extract into a readable format.
End of explanation
import cifar_utils
test_x, test_y = cifar_utils.filter_data(
*cifar_utils.load_cifar(cifar_loc, cifar_filename="cifar_test.data", norm=True))
no_norm_x, no_norm_y = cifar_utils.filter_data(
*cifar_utils.load_cifar(cifar_loc, cifar_filename="cifar_test.data", norm=False))
Explanation: Load Cifar
The first step in building any application, using machine-learning or otherwise, is to understand the application requirements. Load the dataset into the notebook so you can examine it and better understand the dataset you will be working with. The cifar_utils library provides several utilities for working with CIFAR data – we will make use of one of them here.
End of explanation
%matplotlib inline
cifar_utils.show_example_images(no_norm_x, no_norm_y, 2)
Explanation: Take a look at the data you've loaded. The size and blurriness of these photos should give you a better understanding of the difficulty of the task you will ask of your machine learning models! If you'd like to see more images, increase the number of rows of images displayed -- the last argument to the function -- to a number greater than 2.
End of explanation
import sys
import os
from clipper_admin import Clipper
# Change the username if necessary
user = ""
# Set the path to the SSH key
key = ""
# Set the SSH host
host = ""
clipper = Clipper(host, user, key)
clipper.start()
Explanation: Start Clipper
Now you're ready to start Clipper! You will be using the clipper_admin client library to perform administrative commands.
Remember, Docker and Docker-Compose must be installed before deploying Clipper. Visit https://docs.docker.com/compose/install/ for instructions on how to do so. In addition, we recommend using Anaconda and Anaconda environments to manage Python.
Start by installing the library with pip:
sh
pip install clipper_admin
Clipper uses Docker to manage application configurations and to deploy machine-learning models. Make sure your Docker daemon, local or remote, is up and running. You can check this by running docker ps in your command line – if your Docker daemon is not running, you will be told explicitly.
Starting Clipper will have the following effect on your setup: <img src="img/start_clipper.png" style="width: 350px;"/>
If you'd like to deploy Clipper locally, you can leave the user and key variables blank and set host="localhost". Otherwise, you can deploy Clipper remotely to a machine that you have SSH access to. Set the user variable to your SSH username, the key variable to the path to your SSH key, and the host variable to the remote hostname or IP address.
If your SSH server is running on a non-standard port, you can specify the SSH port to use as another argument to the Clipper constructor. For example, clipper = Clipper(host, user, key, ssh_port=9999).
End of explanation
clipper.get_all_apps()
Explanation: Congratulations! You now have a running Clipper instance that you can start to interact with. Think of your clipper Python object as a vehicle for that interaction. Try using it to see the applications deployed to this Clipper instance:
End of explanation
app_name = "cifar_demo"
model_name = "birds_vs_planes_classifier"
# If the model doesn't return a prediction in time, predict
# label 0 (bird) by default
default_output = "0"
clipper.register_application(
app_name,
model_name,
"doubles",
default_output,
slo_micros=20000)
Explanation: Create an application
In order to query Clipper for predictions, you need to create an application. Each application specifies a name, a set of models it can query, the query input datatype, the selection policy, and a latency service level objective. Once you register an application with Clipper, the system will create two REST endpoints: one for requesting predictions and for providing feedback.
By associating the query interface with a specific application, Clipper allows frontend developers the flexibility to have multiple applications running in the same Clipper instance. Applications can request predictions from any model in Clipper. This allows a single Clipper instance to serve multiple machine-learning applications. It also provides a convenient mechanism for beta-testing or incremental rollout by creating experimental and stable applications for the same set of queries.
For this tutorial, you will create an application named "cifar_demo" and register a candidate model. Note that Clipper allows you to create the application before deploying the models. Clipper will be moving to a label-based model specification mechanism soon, so that in the future you won't have to explicitly enumerate all the models you want to query up front.
Registering the cifar_demo application with Clipper will have the following effect on your setup: <img src="img/register_app.png" style="width: 500px;"/>
Don't worry if this command seems to take a long time. Before starting Clipper, the Docker containers must be downloaded from Docker Hub. These containers are fairly large and may take awhile to download depending on the speed of your internet connection.
End of explanation
clipper.get_all_apps(verbose=True)
Explanation: Now when you list the applications registered with Clipper, you should see the newly registered "cifar_demo" application show up!
End of explanation
import seaborn as sns
sns.set_style("whitegrid")
sns.despine()
import matplotlib as mpl
%matplotlib notebook
cifar_utils.run_serving_workload(host, app_name, test_x, test_y)
Explanation: Start serving
Now that you have registered an application, you can start querying the application for predictions. In this case,
Clipper has created two endpoints:
http://HOSTNAME:1337/cifar_demo/predict
http://HOSTNAME:1337/cifar_demo/update
You will now start querying Clipper with a simple Python frontend app that computes the average accuracy of the responses after every 100 requests and updates a plot of the results with every iteration.
This diagram shows how the accuracy plot is receiving its test predictions: <img src="img/serve_predictions.png" style="width: 500px;"/>
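For reference, the same prediction endpoint can also be hit by hand with a plain HTTP request. This is a sketch: it assumes Clipper's JSON request format of {"input": [...]} for an application registered with the "doubles" input type, and it reuses host, app_name and test_x from above.
import json, requests
url = "http://%s:1337/%s/predict" % (host, app_name)
payload = {"input": [float(x) for x in test_x[0]]}  # one flattened CIFAR image
response = requests.post(url, headers={"Content-Type": "application/json"}, data=json.dumps(payload))
print(response.json())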
End of explanation |
11,186 | Given the following text description, write Python code to implement the functionality described.
Description:
Replace each element of Array with it 's corresponding rank
Function to assign rank to array elements ; Copy input array into newArray ; Sort newArray [ ] in ascending order ; Dictionary to store the rank of the array element ; Update rank of element ; Assign ranks to elements ; Driver Code ; Given array arr [ ] ; Function call ; Print the array elements
| Python Code:
def changeArr(input1 ) :
newArray = input1 . copy()
newArray . sort()
ranks = { }
rank = 1
for index in range(len(newArray ) ) :
element = newArray[index ] ;
if element not in ranks :
ranks[element ] = rank
rank += 1
for index in range(len(input1 ) ) :
element = input1[index ]
input1[index ] = ranks[input1[index ] ]
if __name__== "__main __":
arr =[100 , 2 , 70 , 2 ]
changeArr(arr )
print(arr )
|
11,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rewriting rules in ReGraph
In the context of ReGraph, by rewriting rules we mean the rules of sesqui-pushout rewriting (see more details here). A rewriting rule consists of the three graphs
Step1: 1. Creating a rewriting rule from a pattern and injecting transformations
Step2: Every rule can be converted to a sequence of human-readable commands,
Step3: 2. Creating a rewriting rule from $lhs$, $p$, $rhs$ and two maps
By default, Rule objects in ReGraph are initialized with three graph objects (NXGraph) corresponding to $p$, $lhs$ and $rhs$, together with two Python dictionaries encoding the homomorphisms $p \rightarrow lhs$ and $p \rightarrow rhs$. This may be useful in a lot of different scenarios. For instance, as in the following example.
Step4: As the result of cloning of the node 1, all its incident edges are copied to the newly created clone node (variable p_clone). However, in our rule we would like to keep only some of the edges and remove the rest as follows.
Step5: Instead of initializing our rule from the pattern and injecting a lot of edge removals, we could directly initialize three objects for $p$, $lhs$ and $rhs$, where $p$ contains only the desired edges. In the following example, because the rule does not specify any merges or additions (so $rhs$ is isomorphic to $p$), we can omit the parameter $rhs$ in the constructor of Rule. | Python Code:
from regraph import NXGraph, Rule, plot_rule
Explanation: Rewriting rules in ReGraph
In the context of ReGraph, by rewriting rules we mean the rules of sesqui-pushout rewriting (see more details here). A rewriting rule consists of the three graphs: $p$ – preserved part, $lhs$ – left hand side, $rhs$ – right hand side, and two mappings: from $p$ to $lhs$ and from $p$ to $rhs$.
Informally, $lhs$ represents a pattern to match in a graph, subject to rewriting. $p$ together with $p \rightarrow lhs$ mapping specifies a part of the pattern which stays preseved during rewriting, i.e. all the nodes/edges/attributes present in $lhs$ but not $p$ will be removed. $rhs$ and $p \rightarrow rhs$ specify nodes/edges/attributes to add to the $p$. In addition, rules defined is such a way allow to clone and merge nodes. If two nodes from $p$ map to the same node in $lhs$, the node corresponding to this node of the pattern will be cloned. Symmetrically, if two nodes from $p$ map to the same node in $rhs$, the corresponding two nodes will be merged.
The following examples will illustrate the idea behind the sesqui-pushout rewriting rules more clearly:
End of explanation
# Define the left-hand side of the rule
pattern = NXGraph()
pattern.add_nodes_from([1, 2, 3])
pattern.add_edges_from([(1, 2), (2, 3)])
rule1 = Rule.from_transform(pattern)
# `inject_clone_node` returns the IDs of the newly created
# clone in P and RHS
p_clone, rhs_clone = rule1.inject_clone_node(1)
rule1.inject_add_node("new_node")
rule1.inject_add_edge("new_node", rhs_clone)
plot_rule(rule1)
Explanation: 1. Creating a rewriting rule from a pattern and injecting transformations
End of explanation
print(rule1.to_commands())
Explanation: Every rule can be converted to a sequence of human-readable commands,
End of explanation
# Define the left-hand side of the rule
pattern = NXGraph()
pattern.add_nodes_from([1, 2, 3])
pattern.add_edges_from([(1, 2), (1, 3), (1, 1), (2, 3)])
# Define the preserved part of the rule
rule2 = Rule.from_transform(pattern)
p_clone, rhs_clone = rule2.inject_clone_node(1)
plot_rule(rule2)
print("New node corresponding to the clone: ", p_clone)
print(rule2.p.edges())
Explanation: 2. Creating a rewriting rule from $lhs$, $p$, $rhs$ and two maps
By default, Rule objects in ReGraph are initialized with three graph objects (NXGraph) corresponding to $p$, $lhs$ and $rhs$, together with two Python dictionaries encoding the homomorphisms $p \rightarrow lhs$ and $p \rightarrow rhs$. This may be useful in a lot of different scenarios. For instance, as in the following example.
End of explanation
rule2.inject_remove_edge(1, 1)
rule2.inject_remove_edge(p_clone, p_clone)
rule2.inject_remove_edge(p_clone, 1)
rule2.inject_remove_edge(p_clone, 2)
rule2.inject_remove_edge(1, 3)
print(rule2.p.edges())
plot_rule(rule2)
Explanation: As the result of cloning of the node 1, all its incident edges are copied to the newly created clone node (variable p_clone). However, in our rule we would like to keep only some of the edges and remove the rest as follows.
End of explanation
# Define the left-hand side of the rule
lhs = NXGraph()
lhs.add_nodes_from([1, 2, 3])
lhs.add_edges_from([(1, 2), (1, 3), (1, 1), (2, 3)])
# Define the preserved part of the rule
p = NXGraph()
p.add_nodes_from([1, "1_clone", 2, 3])
p.add_edges_from([
(1, 2),
(1, "1_clone"),
("1_clone", 3),
(2, 3)])
p_lhs = {1: 1, "1_clone": 1, 2: 2, 3: 3}
# Initialize a rule object
rule3 = Rule(p, lhs, p_lhs=p_lhs)
plot_rule(rule3)
print("New node corresponding to the clone: ", "1_clone")
print(rule3.p.edges())
Explanation: Instead of initializing our rule from the pattern and injecting a lot of edge removals, we could directly initialize three objects for $p$, $lhs$ and $rhs$, where $p$ contains only the desired edges. In the following example, because the rule does not specify any merges or additions (so $rhs$ is isomorphic to $p$), we can omit the parameter $rhs$ in the constructor of Rule.
End of explanation |
11,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
验证码识别 简单版本
Step1: 生成验证码
Step2: 模型
Step3: 模型一共有36.5万参数。 | Python Code:
import time
import os
from multiprocessing import Pool
from captcha.image import ImageCaptcha
import numpy as np
import skimage.io as io
import tensorflow as tf
import matplotlib.pylab as plt
%matplotlib inline
Explanation: Captcha recognition (simple version)
End of explanation
IMG_H = 64
IMG_W = 160
IMG_CHANNALS = 1
CAPTCHA_SIZE = 4
CAPTCHA_NUM = 36
N_CLASSES = CAPTCHA_SIZE * CAPTCHA_NUM
# Generate a captcha image, size 64*160, grayscale
def gen_baptcha(text):
image = ImageCaptcha()
img = image.generate_image(text)
img = img.convert("L").resize([IMG_W, IMG_H])
ret = np.array(img,dtype=np.uint8).reshape([IMG_H,IMG_W,1])
return ret
def text_2_label(text):
key_list = list('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ')
value_list = np.eye(CAPTCHA_NUM, dtype=np.int32).tolist()
label_dict = dict(zip(key_list, value_list))
label_ = map(lambda t: label_dict[t], list(text.upper()))
ret = np.array(label_, dtype=np.uint8).flatten()
return ret
def label_2_text(label):
key_list = list('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ')
ret_list = [key_list[t] for t in label.reshape([CAPTCHA_SIZE,CAPTCHA_NUM]).argmax(axis=1)]
return ''.join(ret_list)
def get_data(batch_size):
char_set = list('1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ')
text_arr = np.random.choice(char_set, batch_size * CAPTCHA_SIZE, replace=True)
text_list = [''.join(t) for t in np.split(text_arr, batch_size)]
images = np.asarray([gen_baptcha(text) for text in text_list], dtype=np.float32)
labels = np.asarray([text_2_label(text) for text in text_list], dtype=np.int32)
return images, labels
def test_plot():
nr,nc = 10, 5
batch = nr*nc
images,labels = get_data(batch)
plt.figure(figsize=(12,5))
for i in range(batch):
plt.subplot(nr,nc,i+1)
plt.axis("off")
plt.subplots_adjust(top=1.5)
plt.imshow(images[i,:,:,0])
plt.show()
return images,labels
images, labels = test_plot()
label_2_text(labels[0])
Explanation: Generate captcha images
End of explanation
def interface(x):
with tf.name_scope("conv-1"):
w = tf.Variable(tf.random_normal([3, 3, 1, 32], stddev=0.01))
b = tf.Variable(tf.constant(0., shape=[32]))
x = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
m,v = tf.nn.moments(x,[0])
x = tf.nn.batch_normalization(x, mean=m, variance=v, offset=None, scale=None, variance_epsilon=1e-6)
x = tf.nn.relu(x)
with tf.name_scope("pool-1"):
x = tf.nn.max_pool(x, ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME')
with tf.name_scope("conv-2"):
w = tf.Variable(tf.random_normal([3, 3, 32, 64], stddev=0.01))
b = tf.Variable(tf.constant(0., shape=[64]))
x = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
m,v = tf.nn.moments(x,[0])
x = tf.nn.batch_normalization(x, mean=m, variance=v, offset=None, scale=None, variance_epsilon=1e-6)
x = tf.nn.relu(x)
with tf.name_scope("pool-2"):
x = tf.nn.max_pool(x, ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME')
with tf.name_scope("conv-3"):
w = tf.Variable(tf.random_normal([3, 3, 64, 64], stddev=0.01))
b = tf.Variable(tf.constant(0., shape=[64]))
x = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
m,v = tf.nn.moments(x,[0])
x = tf.nn.batch_normalization(x, mean=m, variance=v, offset=None, scale=None, variance_epsilon=1e-6)
x = tf.nn.relu(x)
with tf.name_scope("pool-3"):
x = tf.nn.max_pool(x, ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME') # N * 8 * 20 * 64
with tf.name_scope("fc-4"):
shape = x.get_shape()
size = shape[1].value * shape[2].value * shape[3].value
x = tf.reshape(x, [-1, size])
w = tf.Variable(tf.random_normal([size, 1024], stddev=0.01))
b = tf.Variable(tf.constant(0., shape=[1024]))
x = tf.matmul(x, w) + b
m,v = tf.nn.moments(x,[0])
x = tf.nn.batch_normalization(x, mean=m, variance=v, offset=None, scale=None, variance_epsilon=1e-6)
x = tf.nn.relu(x)
with tf.name_scope("fc-5"):
w = tf.Variable(tf.random_normal([1024, N_CLASSES], stddev=0.01))
b = tf.Variable(tf.constant(0., shape=[N_CLASSES]))
x = tf.matmul(x, w) + b
return x
Explanation: Model
End of explanation
MAX_STEP = 500 #100000
BATCH_SIZE = 64
def train():
x = tf.placeholder(tf.float32, [BATCH_SIZE,IMG_H, IMG_W, IMG_CHANNALS])
y = tf.placeholder(tf.int32, [BATCH_SIZE, N_CLASSES])
x_ = x/255.0 - 0.5 # 归一化
logits = interface(x_)
logits_ = tf.reshape(logits, [-1, CAPTCHA_SIZE, CAPTCHA_NUM])
labels_ = tf.reshape(y, [-1, CAPTCHA_SIZE, CAPTCHA_NUM])
with tf.name_scope('loss'):
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits_, labels=labels_, dim=-1)
loss = tf.reduce_mean(cross_entropy, name="loss")
with tf.name_scope("accuracy"):
correct = tf.equal(tf.argmax(logits_, -1), tf.argmax(labels_, -1))
accuracy_one = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy_one")
correct_all = tf.reduce_all(correct, axis=-1)
accuracy_all = tf.reduce_mean(tf.cast(correct_all, tf.float32), name="accuracy_all")
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(MAX_STEP):
x_bt, y_bt = get_data(BATCH_SIZE)
_ = sess.run(optimizer, feed_dict={x: x_bt, y:y_bt})
if i % 100 == 0 or (i+1) == MAX_STEP:
x_bt, y_bt = get_data(BATCH_SIZE)
acc_one, acc_all = sess.run([accuracy_one, accuracy_all], feed_dict={x: x_bt, y: y_bt})
print "step: %d, accuracy: %.4f | %.4f" % (i, acc_one, acc_all)
sess.close()
train()
Explanation: The model has about 365,000 parameters in total.
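A hedged way to double-check that figure (TF1 graph mode is assumed; run this after interface() has been called so the variables exist in the default graph):
total_params = sum(int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())
print(total_params)  # compare with the number quoted above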
End of explanation |
11,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Paragraph to mem prototype
Import modules
Step1: Define some constant variables
Chagne path to your books!
all_books
Step2: Harry paragraphs
We define the harryness of a paragraph as the number of instances of 'Harry' and 'Harry'/sentence in a paragraph
Step3: QA Histograms
We process the loaded pensieve docs. Some basic QA histograms are filled and we plot the # of instances of the different 'verbs' document wise. This will be used to select the significant verbs in each paragraph (appearance freq. < 50)
Step4: Next we plot frequency and frequency per sentence per paragraph
Step5: Generating memory text
With the previous information we can define meaningful paragraphs and sentences. We can iterate over paragraphs, selecting those with nHarry's/sentence > 0.4 and construct memories
Step6: Quick test on documents as proof of principal
Step7: More data?
Scrape the internet for more on Harry Potter. There is a lot of fan finctions plus summaried information on the seires out there.
Import libraries and make call from wikia page using api | Python Code:
#import sys
#sys.path.append('/Users/michaellomnitz/Documents/CDIPS-AI/pensieve/pensieve')
import pensieve as pens
import textacy
from collections import defaultdict
from random import random
import numpy as np
import matplotlib.pyplot as plt
Explanation: Paragraph to mem prototype
Import modules
End of explanation
all_books = ['../../clusterpot/book1.txt']
# '../../clusterpot/book2.txt',
# '../../clusterpot/book3.txt',
# '../../clusterpot/book4.txt',
# '../../clusterpot/book5.txt',
# '../../clusterpot/book6.txt',
# '../../clusterpot/book7.txt']
#all_books = ['../../clusterpot/book1.txt']
colors = ['black', 'red', 'green', 'blue', 'cyan', 'yellow','magenta']
bins = [ 5*i for i in range(0,201) ]
docs = []
for book in all_books:
docs.append(pens.Doc(book))
Explanation: Define some constant variables
Chagne path to your books!
all_books : Path to the 7 seven books used to mine meomories
coolors & bins : Defining some hisotrgram constants
docs : List of pensieve.Doc's, one for each book
End of explanation
def harry_paragraphs( d ):
x = []
y= []
par = []
par_weight = {}
for i, p in enumerate(d.paragraphs):
count = 1
for sent in p.doc.sents:
#print(sent.text)
count+=1
#lines.append(count)
#print(sent)
harryness = p.words['names']['Harry']
harryness_per_sent = p.words['names']['Harry']/count #+ p.words['names']['Potter']
#+ p.words['names']['Potter'] + p.words['names']['Harry Potter']
if len(p.text) > 0:
#print(harryness/len(p.text))
if harryness >=1:
x.append(harryness)
y.append(harryness_per_sent)
if harryness >= 3:
par.append(i)
par_weight[i] = harryness
return x,y
Explanation: Harry paragraphs
We define the harryness of a paragraph as the number of instances of 'Harry' and 'Harry'/sentence in a paragraph
End of explanation
histogram = []
hist_per_sent = []
words_hist = []
for i, d in enumerate(docs):
print('Reading ',d)
h, h_per_line = harry_paragraphs(d)
histogram.append(h)
hist_per_sent.append(h_per_line)
for verb in d.words['verbs']:
words_hist.append(d.words['verbs'][verb])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel('Appearance frequency')
ax.set_title('All names')
#
plt.hist(words_hist,bins)
plt.yscale('log')
#
plt.show()
Explanation: QA Histograms
We process the loaded pensieve docs. Some basic QA histograms are filled and we plot the # of instances of the different 'verbs' document wise. This will be used to select the significant verbs in each paragraph (appearance freq. < 50)
End of explanation
fig2 = plt.figure()
ax = fig2.add_subplot(111)
ax.set_xlabel('Harry appearance frequency')
plt.hist(histogram)#, color=colors)
# for i, h in enumerate(histogram):
# lab = "book"+str(i)
# plt.hist(h,color = colors[i],label = lab)
plt.yscale('log')
ax.legend(bbox_to_anchor=(0, 0, 1, 1))
plt.show()
#print(len(par))
fig3 = plt.figure()
ax = fig3.add_subplot(111)
ax.set_xlabel('Harry appearance frequency per sentence in paragraph')
# for i, h in enumerate(hist_per_sent):
# lab = "book"+str(i)
# plt.hist(h,color = colors[i],label = lab)
plt.hist(hist_per_sent)
plt.yscale('log')
ax.legend(bbox_to_anchor=(0, 0, 1, 1))
plt.show()
Explanation: Next we plot frequency and frequency per sentence per paragraph
End of explanation
max = 100
def para_2_mem(p,counter):
main_verbs = textacy.spacy_utils.get_main_verbs_of_sent(p.doc)
results = []
for verb in main_verbs:
my_string = ''
if p.words['verbs'][verb.text] < 50:
# --- Getting aux verbs
span = textacy.spacy_utils.get_span_for_verb_auxiliaries(verb)
complex_verb = p.doc[span[0]].text
span_end = 1
if textacy.spacy_utils.is_negated_verb(verb) is True:
complex_verb = complex_verb+' not '
span_end = 0
for a in range(span[0]+1,span[1]+span_end):
complex_verb +=" "+p.doc[span[1]].text
# ---
subjects = textacy.spacy_utils.get_subjects_of_verb(verb)
objects = textacy.spacy_utils.get_objects_of_verb(verb)
if len(subjects)>0 and len(objects)>0:
results.append([subjects,complex_verb,objects])
#if counter < max:
#print(subjects,complex_verb,objects)
#print(" ------------------------------- ")
return results
Explanation: Generating memory text
With the previous information we can define meaningful paragraphs and sentences. We can iterate over paragraphs, selecting those with nHarry's/sentence > 0.4 and construct memories:
* Find textacy main verbs and select meaningful verbs (documenta wise freq. < 50) and find their subjects and objects
* Get verb auxiliaries and check if there is necesary negation
End of explanation
# Will only print out the first processed paragraphs for QA
def doc2mem(docs):
counter = 0
passed = 0
for p in docs.paragraphs:
#
count = 1
for sent in p.doc.sents:
count+=1
#
if p.words['names']['Harry']/count >= 0.9 :
#
passed +=1
print(" Sentences ",counter," \n",p.text)
print(" ------------------------------- ")
log = ''
for key in p.words['places']:
log+=key+' '
#print("Places: ", log)
#print(" ------------------------------- ")
log = ''
for key in p.words['names']:
log+=key+' '
print("People: ", log)
print(" ------------------------------- ")
print('Actions \n ')
#
log = para_2_mem(p,counter)
for i_log in log:
print(i_log)
#print(log)
counter+=1
print(" ------------------------------- ")
return passed
counter = 0
passed = 0
for d in docs:
passed+=doc2mem(d)
# print(len(d.paragraphs))
#passed+=doc2mem(d)
print(passed)
Explanation: Quick test on documents as proof of principle
End of explanation
import requests
import json
r = requests.get('http://harrypotter.wikia.com/api/v1/Articles/Top')
hp_id = [it['id'] for it in json.loads(r.content)['items'] if it['title'] == 'Harry Potter'][0]
r = requests.get('http://harrypotter.wikia.com/api/v1/Articles/AsSimpleJson', params={'id': hp_id})
json.dump(r.text, open('HarryPotterWikia1.json', 'w'))
cont = json.loads(r.text)
with open('HarryPotterWikia.txt', 'w') as f:
for section in cont['sections']:
#print(section['title'])
#f.write(section['title'].encode('utf8')+'\n')
f.write(section['title']+'\n')
for unit in section['content']:
if unit['type'] == 'paragraph':f.write(unit['text']+'\n')
text = pens.Doc('HarryPotterWikia.txt')
doc2mem(text)
for a in textacy.extract.named_entities(docs[0].paragraphs[10].doc, include_types=['PERSON'],min_freq = 2):
print(a)
Explanation: More data?
Scrape the internet for more on Harry Potter. There are a lot of fan fictions plus summarized information on the series out there.
Import libraries and make call from wikia page using api
End of explanation |
11,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Host-guest complex setup and simulation using SMIRNOFF
This notebook takes a SMILES string for a guest and a 3D structure for a host, and generates an initial structure of the complex using docking. It then proceeds to solvate, parameterize the system, and then minimize and do a short simulation with OpenMM.
Please note this is intended for educational purposes and comprises a worked example, not a polished tool. The usual disclaimers apply -- don't take anything here as advice on how you should set up these types of systems; this is just an example of setting up a nontrivial system with SMIRNOFF.
Author - David Mobley (UC Irvine)
Prerequisites
Before beginning, you're going to need a license to the OpenEye toolkits (free for academics), and have these installed and working, ideally in your anaconda Python distribution. Then you'll also need the openff-forcefield toolkits installed, which you can do via conda install -c conda-forge openff-toolkit if you are using anaconda Python.
You'll also need the oenotebook OpenEye Jupyter notebook library installed, such as via pip install -i https
Step1: Configuration for your run
We'll use this to configure where to get input files, where to write output files, etc.
Step2: Quickly draw your guest and make sure it's what you intended
OENotebook is super useful and powerful; see https
Step3: Get host file and prep it for docking
(Note that we are going to skip charge assignment for the purposes of this example, because it's slow. So you want to use an input file which has provided charges, OR add charge assignment.)
Retrieve host file, do file bookkeeping
Step4: Prep host file for docking
Here we'll load the host and prepare for docking, which takes a bit of time as it has to get prepared as a "receptor" for docking into
Step5: Generate 3D structure of our guest and dock it
Step6: Visualize in 3D to make sure we placed the guest into the binding site
This is optional, but very helpful to make sure you're starting off with your guest in the binding site. To execute this you'll need nglview for visualization and mdtraj for working with trajectory files
Step7: Solvate complex
Next we're going to solvate the complex using PDBFixer -- a fairly basic tool, but one which should work. Before doing so, we need to combine the host and the guest into a single OEMol, and then in this case we'll write out a file containing this as a PDB for PDBFixer to read. However, we won't actually use the Topology from that PDB going forward, as the PDB will lose chemistry information we currently have in our OEMols (e.g. it can't retain charges, etc.). Instead, we'll obtain an OpenMM Topology by converting directly from OEChem using utility functionality in oeommtools, and we'll solvate THIS using PDBFixer. PDBFixer will still lose the relevant chemistry information, so we'll just copy any water/ions it added back into our original system.
Step8: Apply SMIRNOFF to set up the system for simulation with OpenMM
Next, we apply a SMIRNOFF force field (SMIRNOFF99Frosst) to the system to set it up for simulation with OpenMM (or writing out, via ParmEd, to formats for use in a variety of other simulation packages).
Prepping a system with SMIRNOFF takes basically three components
Step9: Load our force field and parameterize the system
This uses the SMIRNOFF ForceField class and SMIRNOFF XML files to parameterize the system.
Step10: Minimize and (very briefly) simulate our system
Here we will do an energy minimization, followed by a very very brief simulation. These are done in separate cells since OpenMM is quite slow on CPUs so you may not want to run the simulation on your computer if you are using a CPU.
Finalize prep and energy minimize
Step11: Run an MD simulation of a few steps, storing a trajectory for visualization | Python Code:
# NBVAL_SKIP
from openeye import oechem # OpenEye Python toolkits
import oenotebook as oenb
# Check license
print("Is your OEChem licensed? ", oechem.OEChemIsLicensed())
from openeye import oeomega # Omega toolkit
from openeye import oequacpac #Charge toolkit
from openeye import oedocking # Docking toolkit
from oeommtools import utils as oeommutils # Tools for OE/OpenMM
from simtk import unit #Unit handling for OpenMM
from simtk.openmm import app
from simtk.openmm.app import PDBFile
from openff.toolkit.typing.engines.smirnoff import *
import os
from pdbfixer import PDBFixer # for solvating
Explanation: Host-guest complex setup and simulation using SMIRNOFF
This notebook takes a SMILES string for a guest and a 3D structure for a host, and generates an initial structure of the complex using docking. It then proceeds to solvate, parameterize the system, and then minimize and do a short simulation with OpenMM.
Please note this is intended for educational purposes and comprises a worked example, not a polished tool. The usual disclaimers apply -- don't take anything here as advice on how you should set up these types of systems; this is just an example of setting up a nontrivial system with SMIRNOFF.
Author - David Mobley (UC Irvine)
Prerequisites
Before beginning, you're going to need a license to the OpenEye toolkits (free for academics), and have these installed and working, ideally in your anaconda Python distribution. Then you'll also need the openff-forcefield toolkits installed, which you can do via conda install -c conda-forge openff-toolkit if you are using anaconda Python.
You'll also need the oenotebook OpenEye Jupyter notebook library installed, such as via pip install -i https://pypi.anaconda.org/openeye/simple openeye-oenotebook (but for some particular environments this fails and you might pip install instead from https://pypi.anaconda.org/openeye/label/beta/simple/openeye-oenotebook or see troubleshooting tips in , and oeommtools, a library for working with OpenEye/OpenMM in conjunction, which is installable via conda install -c OpenEye/label/Orion -c omnia. You also need pdbfixer.
A possibly complete set of installation instructions is to download Anaconda 3 and then do something like this:
./Anaconda3-4.4.0-Linux-x86_64.sh
conda update --all
conda install -c OpenEye/label/Orion -c omnia oeommtools
conda install -c conda-forge openmm openff-toolkit nglview pdbfixer
pip install -i https://pypi.anaconda.org/openeye/simple openeye-toolkits
pip install -i https://pypi.anaconda.org/openeye/simple/openeye-oenotebook
For nglview viewing of 3D structures within the notebook, you will likely also need to run jupyter nbextension install --py nglview-js-widgets --user and jupyter-nbextension enable nglview --py --sys-prefix.
Some platforms may have issues with the openeye-oenotebook installation, so a workaround may be something like pip install https://pypi.anaconda.org/openeye/label/beta/simple/openeye-oenotebook/0.8.1/OpenEye_oenotebook-0.8.1-py2.py3-none-any.whl.
Import some tools we need initially
(Let's do this early so you can fail quickly if you don't have the tools you need)
End of explanation
# Where will we write outputs? Directory will be created if it does not exist
datadir = 'datafiles'
# Where will we download the host file from? The below is an uncharged host
#host_source = 'https://raw.githubusercontent.com/MobleyLab/SAMPL6/master/host_guest/OctaAcidsAndGuests/OA.mol2' #octa acid
# Use file provided in this directory - already charged
host_source = 'OA.mol2'
# What SMILES string for the guest? Should be isomeric SMILES
guest_smiles = 'OC(CC1CCCC1)=O' # Use cyclopentyl acetic acid, the first SAMPL6 octa acid guest
# Another useful source of host-guest files is the benchmarksets repo, e.g. github.com/mobleylab/benchmarksets
# This notebook has also been tested on CB7 Set 1 host-cb7.mol2 with SMILES CC12CC3CC(C1)(CC(C3)(C2)[NH3+])C.
Explanation: Configuration for your run
We'll use this to configure where to get input files, where to write output files, etc.
End of explanation
# NBVAL_SKIP
# Create empty OEMol
mol = oechem.OEMol()
# Convert SMILES
oechem.OESmilesToMol(mol, guest_smiles)
# Draw
oenb.draw_mol(mol)
Explanation: Quickly draw your guest and make sure it's what you intended
OENotebook is super useful and powerful; see https://www.eyesopen.com/notebooks-directory. Here we only use a very small amount of what's available, drawing on http://notebooks.eyesopen.com/introduction-to-oenb.html
End of explanation
# NBVAL_SKIP
# Output host and guest files
hostfile = os.path.join(datadir, 'host.mol2')
guestfile = os.path.join(datadir, 'guest.mol2')
# Create data dir if not present
if not os.path.isdir(datadir):
os.mkdir(datadir)
# Set host file name and retrieve file
if 'http' in host_source:
import urllib
urllib.request.urlretrieve(host_source, hostfile)
else:
import shutil
shutil.copy(host_source, hostfile)
Explanation: Get host file and prep it for docking
(Note that we are going to skip charge assignment for the purposes of this example, because it's slow. So you want to use an input file which has provided charges, OR add charge assignment.)
Retrieve host file, do file bookkeeping
End of explanation
# NBVAL_SKIP
# Read in host file
ifile = oechem.oemolistream(hostfile)
host = oechem.OEMol()
oechem.OEReadMolecule( ifile, host)
ifile.close()
# Prepare a receptor - Start by getting center of mass to use as a hint for where to dock
com = oechem.OEFloatArray(3)
oechem.OEGetCenterOfMass(host, com)
# Create receptor, as per https://docs.eyesopen.com/toolkits/python/dockingtk/receptor.html#creating-a-receptor
receptor = oechem.OEGraphMol()
oedocking.OEMakeReceptor(receptor, host, com[0], com[1], com[2])
Explanation: Prep host file for docking
Here we'll load the host and prepare it for docking, which takes a bit of time because the host has to be set up as a "receptor" that the guest can be docked into.
End of explanation
# NBVAL_SKIP
#initialize omega for conformer generation
omega = oeomega.OEOmega()
omega.SetMaxConfs(100) #Generate up to 100 conformers since we'll use for docking
omega.SetIncludeInput(False)
omega.SetStrictStereo(True) #Refuse to generate conformers if stereochemistry not provided
#Initialize charge generation
chargeEngine = oequacpac.OEAM1BCCCharges()
# Initialize docking
dock = oedocking.OEDock()
dock.Initialize(receptor)
# Build OEMol from SMILES
# Generate new OEMol and parse SMILES
mol = oechem.OEMol()
oechem.OEParseSmiles( mol, guest_smiles)
# Set to use a simple neutral pH model
oequacpac.OESetNeutralpHModel(mol)
# Generate conformers with Omega; keep only best conformer
status = omega(mol)
if not status:
print("Error generating conformers for %s." % (guest_smiles))
#print(smi, name, mol.NumAtoms()) #Print debug info -- make sure we're getting protons added as we should
# Assign AM1-BCC charges
oequacpac.OEAssignCharges(mol, chargeEngine)
# Dock to host
dockedMol = oechem.OEGraphMol()
status = dock.DockMultiConformerMolecule(dockedMol, mol) #By default returns only top scoring pose
sdtag = oedocking.OEDockMethodGetName(oedocking.OEDockMethod_Chemgauss4)
oedocking.OESetSDScore(dockedMol, dock, sdtag)
dock.AnnotatePose(dockedMol)
# Write out docked pose if docking successful
if status == oedocking.OEDockingReturnCode_Success:
outmol = dockedMol
# Write out
tripos_mol2_filename = os.path.join(os.path.join(datadir, 'docked_guest.mol2'))
ofile = oechem.oemolostream( tripos_mol2_filename )
oechem.OEWriteMolecule( ofile, outmol)
ofile.close()
# Clean up residue names in mol2 files that are tleap-incompatible: replace substructure names with valid text.
infile = open( tripos_mol2_filename, 'r')
lines = infile.readlines()
infile.close()
newlines = [line.replace('<0>', 'GUEST') for line in lines]
outfile = open(tripos_mol2_filename, 'w')
outfile.writelines(newlines)
outfile.close()
else:
raise Exception("Error: Docking failed.")
Explanation: Generate 3D structure of our guest and dock it
End of explanation
# NBVAL_SKIP
# Import modules
import nglview
import mdtraj
# Load host structure ("trajectory")
traj = mdtraj.load(os.path.join(datadir, 'host.mol2'))
# Load guest structure
lig = mdtraj.load(os.path.join(tripos_mol2_filename))
# Figure out which atom indices correspond to the guest, for use in visualization
atoms_guest = [ traj.n_atoms+i for i in range(lig.n_atoms)]
# "Stack" host and guest Trajectory objects into a single object
complex = traj.stack(lig)
# Visualize
view = nglview.show_mdtraj(complex)
view.add_representation('spacefill', selection="all")
view.add_representation('spacefill', selection=atoms_guest, color='blue') #Adjust guest to show as blue for contrast
# The view command needs to be the last command issued to nglview
view
Explanation: Visualize in 3D to make sure we placed the guest into the binding site
This is optional, but very helpful to make sure you're starting off with your guest in the binding site. To execute this you'll need nglview for visualization and mdtraj for working with trajectory files
End of explanation
# NBVAL_SKIP
# Join OEMols into complex
complex = host.CreateCopy()
oechem.OEAddMols( complex, outmol)
print("Host+guest number of atoms %s" % complex.NumAtoms())
# Write out complex PDB file (won't really use it except as a template)
ostream = oechem.oemolostream( os.path.join(datadir, 'complex.pdb'))
oechem.OEWriteMolecule( ostream, complex)
ostream.close()
# Solvate the system using PDBFixer
# Loosely follows https://github.com/oess/openmm_orion/blob/master/ComplexPrepCubes/utils.py
fixer = PDBFixer( os.path.join(datadir, 'complex.pdb'))
# Convert between OpenEye and OpenMM Topology
omm_top, omm_pos = oeommutils.oemol_to_openmmTop(complex)
# Do it a second time to create a topology we can destroy
fixer_top, fixer_pos = oeommutils.oemol_to_openmmTop(complex)
chain_names = []
for chain in omm_top.chains():
chain_names.append(chain.id)
# Use correct topology, positions
#fixer.topology = copy.deepcopy(omm_top)
fixer.topology = fixer_top
fixer.positions = fixer_pos
# Solvate in 20 mM NaCl and water
fixer.addSolvent(padding=unit.Quantity( 1.0, unit.nanometers), ionicStrength=unit.Quantity( 20, unit.millimolar))
print("Number of atoms after applying PDBFixer: %s" % fixer.topology.getNumAtoms())
# The OpenMM topology produced by the solvation fixer has missing bond
# orders and aromaticity. So our next job is to update our existing OpenMM Topology by copying
# in just the water molecules and ions
# Atom dictionary between the the PDBfixer topology and the water_ion topology
fixer_atom_to_wat_ion_atom = {}
# Loop over new topology and copy water molecules and ions into pre-existing topology
for chain in fixer.topology.chains():
if chain.id not in chain_names:
n_chain = omm_top.addChain(chain.id)
for res in chain.residues():
n_res = omm_top.addResidue(res.name, n_chain)
for at in res.atoms():
n_at = omm_top.addAtom(at.name, at.element, n_res)
fixer_atom_to_wat_ion_atom[at] = n_at
# Copy over any bonds needed
for bond in fixer.topology.bonds():
at0 = bond[0]
at1 = bond[1]
try:
omm_top.addBond(fixer_atom_to_wat_ion_atom[at0],
fixer_atom_to_wat_ion_atom[at1], type=None, order=1)
except:
pass
# Build new position array
omm_pos = omm_pos + fixer.positions[len(omm_pos):]
# Write file of solvated system for visualization purposes
PDBFile.writeFile(omm_top, omm_pos, open(os.path.join(datadir, 'complex_solvated.pdb'), 'w'))
Explanation: Solvate complex
Next we're going to solvate the complex using PDBFixer -- a fairly basic tool, but one which should work. Before doing so, we need to combine the host and the guest into a single OEMol, and then in this case we'll write out a file containing this as a PDB for PDBFixer to read. However, we won't actually use the Topology from that PDB going forward, as the PDB will lose chemistry information we currently have in our OEMols (e.g. it can't retain charges, etc.). Instead, we'll obtain an OpenMM Topology by converting directly from OEChem using utility functionality in oeommtools, and we'll solvate THIS using PDBFixer. PDBFixer will still lose the relevant chemistry information, so we'll just copy any water/ions it added back into our original system.
End of explanation
# NBVAL_SKIP
# Keep a list of OEMols of our components
oemols = []
# Build ions from SMILES strings
smiles = ['[Na+]', '[Cl-]']
for smi in smiles:
mol = oechem.OEMol()
oechem.OESmilesToMol(mol, smi)
# Make sure we have partial charges assigned for these (monatomic, so equal to formal charge)
for atom in mol.GetAtoms():
atom.SetPartialCharge(atom.GetFormalCharge())
oemols.append(mol)
# Build water reference molecule
mol = oechem.OEMol()
oechem.OESmilesToMol(mol, 'O')
oechem.OEAddExplicitHydrogens(mol)
oechem.OETriposAtomNames(mol)
oemols.append(mol)
# Add oemols of host and guest
oemols.append(host)
oemols.append(outmol)
Explanation: Apply SMIRNOFF to set up the system for simulation with OpenMM
Next, we apply a SMIRNOFF force field (SMIRNOFF99Frosst) to the system to set it up for simulation with OpenMM (or writing out, via ParmEd, to formats for use in a variety of other simulation packages).
Prepping a system with SMIRNOFF takes basically three components:
- An OpenMM Topology for the system, which we have from above (coming out of PDBFixer)
- OEMol objects for the components of the system (here host, guest, water and ions)
- The force field XML files
Here, we do not yet have OEMol objects for the ions, so our first step is to generate those and combine them with the host and guest OEMols.
Build a list of OEMols of all our components
End of explanation
# NBVAL_SKIP
# Load force fields for small molecules (plus default ions), water, and (temporarily) hydrogen bonds.
# TODO add HBonds constraint through createSystem when openforcefield#32 is implemented, alleviating need for constraints here
ff = ForceField('test_forcefields/smirnoff99Frosst.offxml',
'test_forcefields/hbonds.offxml',
'test_forcefields/tip3p.offxml')
# Set up system
# This draws to some extent on Andrea Rizzi's code at https://github.com/MobleyLab/SMIRNOFF_paper_code/blob/master/scripts/create_input_files.py
system = ff.createSystem(fixer.topology, oemols, nonbondedMethod = PME, nonbondedCutoff=1.1*unit.nanometer, ewaldErrorTolerance=1e-4) #, constraints=smirnoff.HBonds)
# TODO add HBonds constraints here when openforcefield#32 is implemented.
# Fix switching function.
# TODO remove this when openforcefield#31 is fixed
for force in system.getForces():
if isinstance(force, openmm.NonbondedForce):
force.setUseSwitchingFunction(True)
force.setSwitchingDistance(1.0*unit.nanometer)
Explanation: Load our force field and parameterize the system
This uses the SMIRNOFF ForceField class and SMIRNOFF XML files to parameterize the system.
End of explanation
# NBVAL_SKIP
# Even though we're just going to minimize, we still have to set up an integrator, since a Simulation needs one
integrator = openmm.VerletIntegrator(2.0*unit.femtoseconds)
# Prep the Simulation using the parameterized system, the integrator, and the topology
simulation = app.Simulation(fixer.topology, system, integrator)
# Copy in the positions
simulation.context.setPositions( fixer.positions)
# Get initial state and energy; print
state = simulation.context.getState(getEnergy = True)
energy = state.getPotentialEnergy() / unit.kilocalories_per_mole
print("Energy before minimization (kcal/mol): %.2g" % energy)
# Minimize, get final state and energy and print
simulation.minimizeEnergy()
state = simulation.context.getState(getEnergy=True, getPositions=True)
energy = state.getPotentialEnergy() / unit.kilocalories_per_mole
print("Energy after minimization (kcal/mol): %.2g" % energy)
newpositions = state.getPositions()
Explanation: Minimize and (very briefly) simulate our system
Here we will do an energy minimization, followed by a very very brief simulation. These are done in separate cells since OpenMM is quite slow on CPUs so you may not want to run the simulation on your computer if you are using a CPU.
Finalize prep and energy minimize
End of explanation
# NBVAL_SKIP
# Set up NetCDF reporter for storing trajectory; prep for Langevin dynamics
from mdtraj.reporters import NetCDFReporter
integrator = openmm.LangevinIntegrator(300*unit.kelvin, 1./unit.picosecond, 2.*unit.femtoseconds)
# Prep Simulation
simulation = app.Simulation(fixer.topology, system, integrator)
# Copy in minimized positions
simulation.context.setPositions(newpositions)
# Initialize velocities to correct temperature
simulation.context.setVelocitiesToTemperature(300*unit.kelvin)
# Set up to write trajectory file to NetCDF file in data directory every 100 frames
netcdf_reporter = NetCDFReporter(os.path.join(datadir, 'trajectory.nc'), 100) #Store every 100 frames
# Initialize reporters, including a CSV file to store certain stats every 100 frames
simulation.reporters.append(netcdf_reporter)
simulation.reporters.append(app.StateDataReporter(os.path.join(datadir, 'data.csv'), 100, step=True, potentialEnergy=True, temperature=True, density=True))
# Run the simulation and print start info; store timing
print("Starting simulation")
start = time.clock()
simulation.step(1000) #1000 steps of dynamics
end = time.clock()
# Print elapsed time info, finalize trajectory file
print("Elapsed time %.2f seconds" % (end-start))
netcdf_reporter.close()
print("Done!")
# NBVAL_SKIP
# Load stored trajectory using MDTraj; the trajectory doesn't contain chemistry info so we also load a PDB
traj= mdtraj.load(os.path.join(datadir, 'trajectory.nc'), top=os.path.join(datadir, 'complex_solvated.pdb'))
#Recenter/impose periodicity to the system
anchor = traj.top.guess_anchor_molecules()[0]
imgd = traj.image_molecules(anchor_molecules=[anchor])
traj.center_coordinates()
# View the trajectory
view = nglview.show_mdtraj(traj)
# I haven't totally figured out nglview's selection language for our purposes here, so I'm just showing two residues
# which seems (in this case) to include the host and guest plus an ion (?).
view.add_licorice('1-2')
view
# NBVAL_SKIP
# Save centered trajectory for viewing elsewhere
traj.save_netcdf(os.path.join(datadir, 'trajectory_centered.nc'))
Explanation: Run an MD simulation of a few steps, storing a trajectory for visualization
End of explanation |
11,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Wet Bulb Calculation Analysis
Common/Helper Methods
Step2: Ranges and Sensor Accuracy
Analysis Assumptions
Step3: Sensor Errors
Step4: Tim Brice and Todd Hall Wet Bulb calculation
Original Javascript version
Step5: Max Error Propagation
Step6: Max Error Table
Step7: Backward Stability
If we use the same RH and P and only change T, it should be smooth or that might indicate not backward stable.
Step8: Lets do the same check with RH
Step9: Backward stability check with P
Step10: Uncertainty
The algorithm I use to calculate uncertainty is also a good test for backward stability.
Step11: Pressure Sensor Configuration
Does lack of having a pressure sensor affect our errors if pressure can vary from 17hPa from some known value? | Python Code:
%matplotlib inline
import math
from numpy import *
import matplotlib.pyplot as plt
from matplotlib import cm
from pylab import *
from operator import itemgetter
import hygrometry
def frange(x, y, jump):
"""Like range(), but works with floats."""
x=float(x)
y=float(y)
jump = float(jump)
while x < y:
yield x
x += jump
def plot2derrors(all_maxerrors):
mmap=[{}, {}, {}]
for i in range(0, len(all_maxerrors[0])):
for j in [0, 1, 2]:
key="%.1f" % (all_maxerrors[j][i],)
if key in mmap[j]:
if mmap[j][key][1] < all_maxerrors[3][i]:
mmap[j][key] = [all_maxerrors[j][i], all_maxerrors[3][i]]
else:
mmap[j][key] = [all_maxerrors[j][i], all_maxerrors[3][i]]
xl=['T', 'H', 'P']
for j in [0, 1, 2]:
X=[]
Y=[]
for key in mmap[j]:
X.append(mmap[j][key][0])
Y.append(mmap[j][key][1])
X, Y = [list(x) for x in zip(*sorted(zip(X, Y), key=lambda pair: pair[0]))]
figure()
plot(X, Y, 'r')
xlabel(xl[j])
ylabel('max(E)')
show()
def calc_maxerrors(wet_bulb_calc_func, sensor_range, sensor_errors, steps, pressure_sensor=True, pressure_constant=977):
tidx=0
rhidx=1
pidx=2
merr = 0
minfo = []
all_maxerrors = [[], [], [], []]
count = 0.0
mserr = 0.0
for Tc in frange(sensor_range[tidx][0], sensor_range[tidx][1], steps[tidx]):
for P in frange(sensor_range[pidx][0], sensor_range[pidx][1], steps[pidx]):
for RH in frange(sensor_range[rhidx][0], sensor_range[rhidx][1], steps[rhidx]):
sterr = 0
Ptruth=P
if not pressure_sensor:
Ptruth=pressure_constant
st = [Tc, RH, P]
truth = wet_bulb_calc_func(Tc, RH, Ptruth)
eTc=sensor_errors[tidx](Tc)
eRH=sensor_errors[rhidx](RH)
eP=sensor_errors[pidx](P)
combos = [[eTc, 0, 0], [-eTc, 0, 0], \
[0, eRH, 0], [0, -eRH, 0], \
[0, 0, eP], [0, 0, -eP], \
[eTc, eRH, 0], [eTc, -eRH, 0], [-eTc, eRH, 0], [-eTc, -eRH, 0], \
[eTc, 0, eP], [eTc, 0, -eP], [-eTc, 0, eP], [-eTc, 0, -eP], \
[0, eRH, eP], [0, eRH, -eP], [0, -eRH, eP], [0, -eRH, -eP], \
[eTc, eRH, eP], [eTc, -eRH, eP], [eTc, -eRH, -eP],\
[-eTc, eRH, eP], [-eTc, -eRH, eP], [-eTc, -eRH, -eP]]
for c in combos:
tmp = wet_bulb_calc_func(Tc+c[0], RH+c[1], P+c[2])
nerr = math.fabs(truth-tmp)
mserr += nerr
count += 1.0
if nerr > merr:
merr = nerr
minfo = [merr, [Tc, RH, P], c]
if nerr > sterr:
sterr = nerr
all_maxerrors[0].append(Tc)
all_maxerrors[1].append(RH)
all_maxerrors[2].append(P)
all_maxerrors[3].append(sterr)
mserr=mserr/count
minfo.append(mserr)
return [minfo, all_maxerrors]
def calc_uncertainty(wet_bulb_calc_func, sensor_range, sensor_errors, steps, pressure_sensor=True, pressure_constant=977):
tidx=0
rhidx=1
pidx=2
merr = 0
minfo = []
h=0.01
h_T=h
h_RH=h
h_P=h
all_maxerrors = [[], [], [], []]
for Tc in frange(sensor_range[tidx][0], sensor_range[tidx][1], steps[tidx]):
for RH in frange(sensor_range[rhidx][0], sensor_range[rhidx][1], steps[rhidx]):
for P in frange(sensor_range[pidx][0], sensor_range[pidx][1], steps[pidx]):
if not pressure_sensor:
eP=math.fabs(P-pressure_constant)
#eP=0
eTc=sensor_errors[tidx](Tc)
eRH=sensor_errors[rhidx](RH)
eP=sensor_errors[pidx](P)
dTc_dF=(wet_bulb_calc_func(Tc+h_T, RH, P) - wet_bulb_calc_func(Tc-h_T, RH, P))/(2.0*h_T)
dRH_dF=(wet_bulb_calc_func(Tc, RH+h_RH, P) - wet_bulb_calc_func(Tc, RH-h_RH, P))/(2.0*h_RH)
dP_dF=(wet_bulb_calc_func(Tc, RH, P+h_P) - wet_bulb_calc_func(Tc, RH, P-h_P))/(2.0*h_P)
dF = math.sqrt( (dTc_dF*eTc)**2.0 + (dRH_dF*eRH)**2.0 + (dP_dF*eP)**2.0 )
if dF > merr:
merr = dF
minfo = [merr, [Tc, RH, P]]
all_maxerrors[0].append(Tc)
all_maxerrors[1].append(RH)
all_maxerrors[2].append(P)
all_maxerrors[3].append(dF)
return [minfo, all_maxerrors]
def batch_WB(wet_bulb_func, range_t, range_rh, range_p):
ret = []
for Tc in frange(range_t[0], range_t[1], range_t[2]):
for RH in frange(range_rh[0], range_rh[1], range_rh[2]):
for P in frange(range_p[0], range_p[1], range_p[2]):
value = wet_bulb_func(Tc, RH, P)
ret.append([Tc, RH, P, value])
return ret
Explanation: Wet Bulb Calculation Analysis
Common/Helper Methods
End of explanation
range_Tc = [hygrometry.conv_f2c(85.), hygrometry.conv_f2c(110.)]
range_P = [904., 1050.]
range_RH = [20., 100.]
srange=[range_Tc, range_RH, range_P]
Explanation: Ranges and Sensor Accuracy
Analysis Assumptions:
* Temperatures range from 85F - 110F
* Pressures range from 904 hPa - 1050 hPa
* RH ranges from 20% - 100%
Example sensor to show error propagation will be the SHT-75.
End of explanation
#These errors are from SHT75 datasheet, Figure 2 and Figure 3
def error_Tc(Tc):
#-40C - 10
if Tc <= 10:
return (-1.1/50.)*(Tc-10) + 0.4
#10C - 25C
elif Tc <= 25:
return (-0.1/15.)*(Tc-25) + 0.3
#25C - 40C
elif Tc <= 40:
return (0.1/15.)*(Tc-25) + 0.3
#40C - 100C
elif Tc <= 100:
return (1.1/50.)*(Tc-40) + 0.4
else:
return Inf
#These errors are from SHT75 datasheet
def error_RH(RH):
if RH >= 10.0 and RH <= 90:
return 1.8
elif RH < 10.0:
return (-2.2/10.0)*(RH-10.0) + 1.8
else:
return (2.2/10.0)*(RH-90.0) + 1.8
#TODO: Look at pressure datasheet and our calculations some more.
def error_P(P):
return 2.7
#Plot sensor errors
figure()
T = []
DT = []
for t in frange(-40., 100., 1):
T.append(t)
DT.append(error_Tc(t))
plot(T, DT, 'r')
xlabel('T (C)')
ylabel('DT (C)')
title('SHT-75 Temperature Error')
show()
figure()
RH = []
DRH = []
for rh in frange(0., 101., 1):
RH.append(rh)
DRH.append(error_RH(rh))
plot(RH, DRH, 'r')
xlabel('RH (%RH)')
ylabel('DRH (%RH)')
title('SHT-75 Humidity Error')
show()
figure()
P = []
DP = []
for p in frange(range_P[0], range_P[1], 1):
P.append(p)
DP.append(error_P(p))
plot(P, DP, 'r')
xlabel('P (hPa)')
ylabel('DP (hPa)')
title('Pressure Module Error')
show()
Explanation: Sensor Errors
End of explanation
print (hygrometry.wetbulb(20, 0, 985) - 5.85) < 0.01
print (hygrometry.wetbulb(20, 0, 985) - 20.) < 0.01
print math.fabs(hygrometry.wetbulb(20, 50, 985) - 13.8) < 0.01
print (hygrometry.wetbulb(30, 25, 985) - 16.92) < 0.01
print (hygrometry.wetbulb(40, 50, 3) - 27.67) < 0.01
print (hygrometry.wetbulb(41.3, 24.2, 351.6) - 20.5) < 0.01
print (hygrometry.wetbulb(22.9, 81.2, 476.1) - 20.12) < 0.01
print (hygrometry.wetbulb(41.2, 24, 900) - 23.83) < 0.01
print math.fabs(hygrometry.wetbulb(41, 12, 1000) - 20.08) < 0.01
Explanation: Tim Brice and Todd Hall Wet Bulb calculation
Original Javascript version: http://www.srh.noaa.gov/epz/?n=wxcalc_rh
Easy Calculation tries to explain it: https://www.easycalculation.com/weather/learn-dewpoint-wetbulb.php
vapor pressure, saturation vapor pressure, dew point:
$$ es = 6.112 \cdot e^{\frac{17.67 \cdot T_c}{T_c+243.5}} $$
$$ V = \frac{es \cdot RH}{100} $$
$$ T_d = \frac{243.5 \cdot \ln(V/6.112)}{17.67 - \ln(V/6.112)} $$
These intermediate values are then passed to an incremental loop function, calc_wb(), to arrive at the wet bulb temperature.
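A minimal sketch of those intermediate quantities in plain Python (my own illustration of the formulas above, not the library's code; hygrometry.wetbulb wraps the full iterative calculation):
import math
Tc, RH = 30.0, 25.0  # deg C, %RH (example inputs used in the checks below)
es = 6.112 * math.exp(17.67 * Tc / (Tc + 243.5))  # saturation vapor pressure (hPa)
V = es * RH / 100.0  # vapor pressure (hPa)
Td = 243.5 * math.log(V / 6.112) / (17.67 - math.log(V / 6.112))  # dew point, roughly 8 deg C here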
Error and Stability of the NOAA Calculation
Forward Stability
End of explanation
step_Tc = 0.3
step_RH = 1.8
step_P = 2.7
ssteps=[step_Tc, step_RH, step_P]
info, errs = calc_maxerrors(hygrometry.wetbulb, srange, [error_Tc, error_RH, error_P], ssteps, True)
print info
print "Max error: %f" % (info[0],)
plot2derrors(errs)
Explanation: Max Error Propagation
End of explanation
print " RH/Temperature Wet Bulb Error Table in Celsius"
print "T\RH| ",
for xh in frange(0., 101., 10.):
print " %-3s " % ("%d" % (xh,),),
print
print "-"*(11*6)+"------",
frow = True
for y in frange(0., 51., 5.):
print
print "%-3s | " % ("%d" % (y,),),
for x in frange(0., 101., 10.):
info, errs = calc_maxerrors(hygrometry.wetbulb, [[y, y+1], [x, x+1], [904.0, 1050.0]], [error_Tc, error_RH, error_P], [5, 10, 2.7], True)
merr = array(errs[3]).max()
print "%.2f " % (merr,),
print
print " RH/Temperature Wet Bulb Error Table in Fahrenheit"
print "T\RH| ",
for xh in frange(0., 101., 10.):
print " %-3s " % ("%d" % (xh,),),
print
print "-"*(11*6)+"------",
frow = True
for y in frange(0., 51., hygrometry.conv_f2c(37.)):
print
print "%-3s | " % ("%d" % (hygrometry.conv_c2f(y),),),
for x in frange(0., 101., 10.):
info, errs = calc_maxerrors(hygrometry.wetbulb, [[y, y+1], [x, x+1], [904.0, 1050.0]], [error_Tc, error_RH, error_P], [5, 10, 2.7], True)
merr = array(errs[3]).max()
print "%.2f " % (hygrometry.conv_c2f(merr)-32.0,),
print
Explanation: Max Error Table
End of explanation
values = batch_WB(hygrometry.wetbulb, [hygrometry.conv_f2c(85.), hygrometry.conv_f2c(110.), 0.01], [65., 66., 65.], [977., 978., 977.])
Tv, RHv, Pv, Vv = zip(*values)
figure()
plot(Tv, Vv, 'r')
xlabel('T')
ylabel('WB')
show()
Explanation: Backward Stability
If we hold RH and P fixed and only vary T, the computed wet bulb should change smoothly; jumps or oscillations would suggest the algorithm is not backward stable.
End of explanation
values = batch_WB(hygrometry.wetbulb, [37., 38., 37.], [20, 100, 0.1], [977., 978., 977.])
Tv, RHv, Pv, Vv = zip(*values)
figure()
plot(RHv, Vv, 'r')
xlabel('RH')
ylabel('WB')
show()
Explanation: Let's do the same check with RH
End of explanation
values = batch_WB(hygrometry.wetbulb, [37., 38., 37.], [65., 66., 65.], [904., 1050., 0.1])
Tv, RHv, Pv, Vv = zip(*values)
figure()
plot(Pv, Vv, 'r')
xlabel('P')
ylabel('WB')
show()
Explanation: Backward stability check with P
End of explanation
step_Tc = 0.3
step_RH = 1.8
step_P = 2.7
ssteps=[step_Tc, step_RH, step_P]
info, errs = calc_uncertainty(hygrometry.wetbulb, srange, [error_Tc, error_RH, error_P], ssteps, True)
print info
print "Uncertainty: %f" % (info[0],)
plot2derrors(errs)
Explanation: Uncertainty
The algorithm I use to calculate uncertainty is also a good test for backward stability.
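Concretely, calc_uncertainty above propagates the sensor errors with the standard first-order formula, approximating the partial derivatives by central differences:
$$\delta F = \sqrt{\left(\frac{\partial F}{\partial T}\,\delta T\right)^2 + \left(\frac{\partial F}{\partial RH}\,\delta RH\right)^2 + \left(\frac{\partial F}{\partial P}\,\delta P\right)^2}$$
where $F$ is the wet bulb calculation and $\delta T$, $\delta RH$, $\delta P$ are the sensor errors. A smooth map of $\delta F$ over the input ranges is further evidence of backward stability, since the finite differences would blow up at any discontinuity.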
End of explanation
def error_ambientP(P):
return 17.0
print "Change of Wet Bulb Error in Fahrenheit caused from lack of pressure sensor"
print "T\RH| ",
for xh in frange(0., 101., 10.):
print " %-3s " % ("%d" % (xh,),),
print
print "-"*(11*6)+"------",
frow = True
for y in frange(0., 51., hygrometry.conv_f2c(37.)):
print
print "%-3s | " % ("%d" % (hygrometry.conv_c2f(y),),),
for x in frange(0., 101., 10.):
info, errs = calc_maxerrors(hygrometry.wetbulb, [[y, y+1], [x, x+1], [904.0, 1050.0]], [error_Tc, error_RH, error_P], [5., 10., 2.7], True)
merr = array(errs[3]).max()
info, errs = calc_maxerrors(hygrometry.wetbulb, [[y, y+1], [x, x+1], [904.0, 1050.0]], [error_Tc, error_RH, error_ambientP], [5., 10., 2.7], True)
merr2 = array(errs[3]).max()
print "%.2f " % (hygrometry.conv_c2f(math.fabs(merr-merr2))-32.0,),
print
Explanation: Pressure Sensor Configuration
Does the lack of a pressure sensor affect our errors if the pressure can vary by up to 17 hPa from some known value?
End of explanation |
11,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Proof of work for framework migrations
Jupyter Notebook Demo with Python, pandas and matplotlib.
Context
This analysis shows the progress of the rewrite work from Technical Requirement AB311. Here, the standardization towards JavaEE 8 has to be carried out. In the course of this, the database access technology JPA* has to be used instead of JDBC**.
<sub><sup>*JPA
Step1: Detection of the framework components
Using a regular expression, the two database access technologies can be extracted from the file path. All other files are filtered out using dropna().
Step2: Analysis
Work progress
To track the progress, we calculate the changed source code lines as an approximation.
Step3: Time summary
The respective progresses are merged according to their timestamps.
Step4: Visualization
For the comparison of progress, both technologies are listed side by side.
Step5: Current status of framework migration
Results according to line changes per technology. | Python Code:
import pandas as pd
log = pd.read_csv("../dataset/git_log_refactoring_simple.csv", parse_dates=[3])
log.head()
Explanation: Proof of work for framework migrations
Jupyter Notebook Demo with Python, pandas and matplotlib.
Context
This analysis shows the progress of the rewrite work for Technical Requirement AB311, which mandates standardization on Java EE 8. As part of this, the database access technology JPA* has to be used instead of JDBC**.
<sub><sup>*JPA: Java Persistence API (newer way of accessing a database)</sup></sub><br/>
<sub><sup>**JDBC: Java Database Connectivity (classic direct access to the database)</sup></sub>
Idea
Tracking of changes via the version control system
Importing the development history with project status as of 01/15/2019 from the software version control system.
<sup>Please note that this is synthetic / generated data, as the real dataset is not available. For more details about the generation, see this notebook.</sup>
End of explanation
log['tech'] = log['file'].str.extract("/(jpa|jdbc)/")
log = log.dropna()
log.head()
Explanation: Detection of the framework components
Using a regular expression, the two database access technologies can be extracted from the file path. All other files are filtered out using dropna().
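For illustration, the same extraction applied to a few hypothetical file paths (pandas is already imported as pd above) yields 'jpa', 'jdbc' and NaN, and the NaN row is what dropna() removes:
paths = pd.Series([
    "src/main/java/db/jpa/UserRepository.java",
    "src/main/java/db/jdbc/UserDao.java",
    "src/main/java/ui/LoginView.java"])
paths.str.extract("/(jpa|jdbc)/")  # -> 'jpa', 'jdbc', NaN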
End of explanation
log['lines'] = log['additions'] - log['deletions']
log.head()
Explanation: Analysis
Work progress
To track the progress, we calculate the changed source code lines as an approximation.
End of explanation
log_timed = log.groupby(['timestamp', 'tech']).lines.sum()
log_timed.head()
Explanation: Time summary
The changed lines are summed up per timestamp and technology.
End of explanation
log_progress = log_timed.unstack().fillna(0).cumsum()
log_progress.head()
Explanation: Visualization
To compare progress, the two technologies are placed side by side as columns and the cumulative sum of changed lines is calculated.
End of explanation
log_progress.plot(figsize=[15,8]);
Explanation: Current status of framework migration
Results according to line changes per technology.
End of explanation |
11,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planar Point Patterns in PySAL
Author
Step1: Creating Point Patterns
From lists
We can build a point pattern by using Python lists of coordinate pairs $(s_0, s_1,\ldots, s_m)$ as follows
Step2: Thus $s_0 = (66.22, 32.54), \ s_{11}=(54.46, 8.48)$.
Step3: From numpy arrays
Step4: From shapefiles
This example uses 200 randomly distributed points within the counties of Virginia. Coordinates are for UTM zone 17 N.
Step5: Attributes of PySAL Point Patterns
Step6: Intensity Estimates
The intensity of a point process at point $s_i$ can be defined as
Step7: Intensity based on convex hull | Python Code:
import pysal.lib as ps
import numpy as np
from pysal.explore.pointpats import PointPattern
Explanation: Planar Point Patterns in PySAL
Author: Serge Rey sjsrey@gmail.com and Wei Kang weikang9009@gmail.com
Introduction
This notebook introduces the basic PointPattern class in PySAL and covers the following:
What is a point pattern?
Creating Point Patterns
Attributes of Point Patterns
Intensity Estimates
Next steps
What is a point pattern?
We introduce basic terminology here and point the interested reader to more detailed references on the underlying theory of the statistical analysis of point patterns.
Points and Event Points
To start we consider a series of point locations, $(s_1, s_2, \ldots, s_n)$ in a study region $\Re$. We limit our focus here to a two-dimensional space so that $s_j = (x_j, y_j)$ is the spatial coordinate pair for point location $j$.
We will be interested in two different types of points.
Event Points
Event Points are locations where something of interest has occurred. The term event is very general here and could be used to represent a wide variety of phenomena. Some examples include:
locations of individual plants of a certain species
archeological sites
addresses of disease cases
locations of crimes
the distribution of neurons
among many others.
It is important to recognize that in the statistical analysis of point patterns the interest extends beyond the observed point pattern at hand.
The observed patterns are viewed as realizations from some underlying spatial stochastic process.
Arbitrary Points
The second type of point we consider is a location where the phenomenon of interest has not been observed. These go by various names such as "empty space" or "regular" points, and at first glance might seem less interesting to a spatial analyst. However, these types of points play a central role in a class of point pattern methods that we explore below.
Point Pattern Analysis
The analysis of event points focuses on a number of different characteristics of the collective spatial pattern that is observed. Often the pattern is judged against the hypothesis of complete spatial randomness (CSR). That is, one assumes that the point events arise independently of one another and with constant probability across $\Re$, loosely speaking.
Of course, many of the empirical point patterns we encounter do not appear to be generated from such a simple stochastic process. The departures from CSR can be due to two types of effects.
First order effects
For a point process, the first-order properties pertain to the intensity of the process across space. Whether and how the intensity of the point pattern varies within our study region are questions that assume center stage. Such variation in the intensity of the pattern of, say, addresses of individuals with a certain type of non-infectious disease may reflect the underlying population density. In other words, although the point pattern of disease cases may display variation in intensity in our study region, and thus violate the constant-probability condition, that spatial drift in the pattern intensity could be driven by an underlying covariate.
Second order effects
The second channel by which departures from CSR can arise is through interaction and dependence between events in space. The canonical example is contagious disease, whereby the presence of an infected individual increases the probability of subsequent additional cases nearby.
When a pattern departs from expectation under CSR, this is suggestive that the underlying process may have some spatial structure that merits further investigation. Thus methods for detection of deviations from CSR and testing for alternative processes have given rise to a large literature in point pattern statistics.
Methods of Point Pattern Analysis in PySAL
The points module in PySAL implements basic methods of point pattern analysis organized into the following groups:
Point Processing
Centrography and Visualization
Quadrat Based Methods
Distance Based Methods
In the remainder of this notebook we shall focus on point processing.
End of explanation
points = [[66.22, 32.54], [22.52, 22.39], [31.01, 81.21],
[9.47, 31.02], [30.78, 60.10], [75.21, 58.93],
[79.26, 7.68], [8.23, 39.93], [98.73, 77.17],
[89.78, 42.53], [65.19, 92.08], [54.46, 8.48]]
p1 = PointPattern(points)
p1.mbb
Explanation: Creating Point Patterns
From lists
We can build a point pattern by using Python lists of coordinate pairs $(s_0, s_1,\ldots, s_m)$ as follows:
End of explanation
p1.summary()
type(p1.points)
np.asarray(p1.points)
p1.mbb
Explanation: Thus $s_0 = (66.22, 32.54), \ s_{11}=(54.46, 8.48)$.
End of explanation
points = np.asarray(points)
points
p1_np = PointPattern(points)
p1_np.summary()
Explanation: From numpy arrays
End of explanation
f = ps.examples.get_path('vautm17n_points.shp')
fo = ps.io.open(f)
pp_va = PointPattern(np.asarray([pnt for pnt in fo]))
fo.close()
pp_va.summary()
Explanation: From shapefiles
This example uses 200 randomly distributed points within the counties of Virginia. Coordinates are for UTM zone 17 N.
End of explanation
pp_va.summary()
pp_va.points
pp_va.head()
pp_va.tail()
Explanation: Attributes of PySAL Point Patterns
End of explanation
pp_va.lambda_mbb
Explanation: Intensity Estimates
The intensity of a point process at point $s_j$ can be defined as:
$$\lambda(s_j) = \lim \limits_{|\mathbf{A}s_j| \to 0} \left\{ \frac{E(Y(\mathbf{A}s_j))}{|\mathbf{A}s_j|} \right\} $$
where $\mathbf{A}s_j$ is a small region surrounding location $s_j$ with area $|\mathbf{A}s_j|$, and $E(Y(\mathbf{A}s_j))$ is the expected number of event points in $\mathbf{A}s_j$.
The intensity is the mean number of event points per unit of area at point $s_j$.
Recall that one of the implications of CSR is that the intensity of the point process is constant in our study area $\Re$. In other words $\lambda(s_j) = \lambda(s_{j+1}) = \ldots = \lambda(s_n) = \lambda \ \forall s_j \in \Re$. Thus, if the area of $\Re$ = $|\Re|$ the expected number of event points in the study region is: $E(Y(\Re)) = \lambda |\Re|.$
In PySAL, the intensity is estimated by using a geometric object to encode the study region. We refer to this as the window, $W$. The reason for distinguishing between $\Re$ and $W$ is that the latter permits alternative definitions of the bounding object.
Intensity estimates are based on the following:
$$\hat{\lambda} = \frac{n}{|W|}$$
where $n$ is the number of points in the window $W$, and $|W|$ is the area of $W$.
Intensity based on minimum bounding box:
$$\hat{\lambda}_{mbb} = \frac{n}{|W_{mbb}|}$$
where $W_{mbb}$ is the minimum bounding box for the point pattern.
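As a quick sanity check, the same number can be computed by hand from the pattern's attributes (a sketch; it assumes mbb is stored as [min_x, min_y, max_x, max_y]):
min_x, min_y, max_x, max_y = pp_va.mbb
n = len(pp_va.points)
n / ((max_x - min_x) * (max_y - min_y))  # should agree with pp_va.lambda_mbb above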
End of explanation
pp_va.lambda_hull
Explanation: Intensity based on convex hull:
$$\hat{\lambda}_{hull} = \frac{n}{|W_{hull}|}$$
where $W_{hull}$ is the convex hull for the point pattern.
End of explanation |
11,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Better ML Engineering with ML Metadata
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Install and import TFX
Step3: Import packages
Did you restart the runtime?
If you are using Google Colab, the first time that you run
the cell above, you must restart the runtime by clicking
above "RESTART RUNTIME" button or using "Runtime > Restart
runtime ..." menu. This is because of the way that Colab
loads packages.
Step4: Check the TFX, and MLMD versions.
Step5: Download the dataset
In this colab, we use the Palmer Penguins dataset which can be found on Github. We processed the dataset by leaving out any incomplete records, and drops island and sex columns, and converted labels to int32. The dataset contains 334 records of the body mass and the length and depth of penguins' culmens, and the length of their flippers. You use this data to classify penguins into one of three species.
Step6: Create an InteractiveContext
To run TFX components interactively in this notebook, create an InteractiveContext. The InteractiveContext uses a temporary directory with an ephemeral MLMD database instance. Note that calls to InteractiveContext are no-ops outside the Colab environment.
In general, it is a good practice to group similar pipeline runs under a Context.
Step7: Construct the TFX Pipeline
A TFX pipeline consists of several components that perform different aspects of the ML workflow. In this notebook, you create and run the ExampleGen, StatisticsGen, SchemaGen, and Trainer components and use the Evaluator and Pusher component to evaluate and push the trained model.
Refer to the components tutorial for more information on TFX pipeline components.
Note
Step8: Instantiate and run the StatisticsGen Component
Step9: Instantiate and run the SchemaGen Component
Step10: Instantiate and run the Trainer Component
Step11: Run the Trainer component.
Step12: Evaluate and push the model
Use the Evaluator component to evaluate and 'bless' the model before using the Pusher component to push the model to a serving directory.
Step13: Running the TFX pipeline populates the MLMD Database. In the next section, you use the MLMD API to query this database for metadata information.
Query the MLMD Database
The MLMD database stores three types of metadata
Step14: Create some helper functions to view the data from the MD store.
Step15: First, query the MD store for a list of all its stored ArtifactTypes.
Step16: Next, query all PushedModel artifacts.
Step17: Query the MD store for the latest pushed model. This tutorial has only one pushed model.
Step18: One of the first steps in debugging a pushed model is to look at which trained model is pushed and to see which training data is used to train that model.
MLMD provides traversal APIs to walk through the provenance graph, which you can use to analyze the model provenance.
Step19: Query the parent artifacts for the pushed model.
Step20: Query the properties for the model.
Step21: Query the upstream artifacts for the model.
Step22: Get the training data the model trained with.
Step23: Now that you have the training data that the model trained with, query the database again to find the training step (execution). Query the MD store for a list of the registered execution types.
Step24: The training step is the ExecutionType named tfx.components.trainer.component.Trainer. Traverse the MD store to get the trainer run that corresponds to the pushed model. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
try:
import colab
!pip install --upgrade pip
except:
pass
Explanation: Better ML Engineering with ML Metadata
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/mlmd/mlmd_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/mlmd/mlmd_tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tfx/blob/master/docs/tutorials/mlmd/mlmd_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
<td><a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/mlmd/mlmd_tutorial.ipynb">
<img width=32px src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</td>
</table>
Assume a scenario where you set up a production ML pipeline to classify penguins. The pipeline ingests your training data, trains and evaluates a model, and pushes it to production.
However, when you later try using this model with a larger dataset that contains different kinds of penguins, you observe that your model does not behave as expected and starts classifying the species incorrectly.
At this point, you are interested in knowing:
What is the most efficient way to debug the model when the only available artifact is the model in production?
Which training dataset was used to train the model?
Which training run led to this erroneous model?
Where are the model evaluation results?
Where to begin debugging?
ML Metadata (MLMD) is a library that leverages the metadata associated with ML models to help you answer these questions and more. A helpful analogy is to think of this metadata as the equivalent of logging in software development. MLMD enables you to reliably track the artifacts and lineage associated with the various components of your ML pipeline.
In this tutorial, you set up a TFX Pipeline to create a model that classifies penguins into three species based on the body mass and the length and depth of their culmens, and the length of their flippers. You then use MLMD to track the lineage of pipeline components.
TFX Pipelines in Colab
Colab is a lightweight development environment which differs significantly from a production environment. In production, you may have various pipeline components like data ingestion, transformation, model training, run histories, etc. across multiple, distributed systems. For this tutorial, you should be aware that significant differences exist in Orchestration and Metadata storage - it is all handled locally within Colab. Learn more about TFX in Colab here.
Setup
First, we install and import the necessary packages, set up paths, and download data.
Upgrade Pip
To avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately.
End of explanation
!pip install -q -U tfx
Explanation: Install and import TFX
End of explanation
import os
import tempfile
import urllib
import pandas as pd
import tensorflow_model_analysis as tfma
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
Explanation: Import packages
Did you restart the runtime?
If you are using Google Colab, the first time that you run
the cell above, you must restart the runtime by clicking
above "RESTART RUNTIME" button or using "Runtime > Restart
runtime ..." menu. This is because of the way that Colab
loads packages.
End of explanation
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import ml_metadata as mlmd
print('MLMD version: {}'.format(mlmd.__version__))
Explanation: Check the TFX and MLMD versions.
End of explanation
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
_data_filepath = os.path.join(_data_root, "penguins_processed.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
Explanation: Download the dataset
In this colab, we use the Palmer Penguins dataset which can be found on Github. We processed the dataset by leaving out any incomplete records, dropping the island and sex columns, and converting the labels to int32. The dataset contains 334 records of the body mass and the length and depth of penguins' culmens, and the length of their flippers. You use this data to classify penguins into one of three species.
End of explanation
interactive_context = InteractiveContext()
Explanation: Create an InteractiveContext
To run TFX components interactively in this notebook, create an InteractiveContext. The InteractiveContext uses a temporary directory with an ephemeral MLMD database instance. Note that calls to InteractiveContext are no-ops outside the Colab environment.
In general, it is a good practice to group similar pipeline runs under a Context.
End of explanation
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
interactive_context.run(example_gen)
Explanation: Construct the TFX Pipeline
A TFX pipeline consists of several components that perform different aspects of the ML workflow. In this notebook, you create and run the ExampleGen, StatisticsGen, SchemaGen, and Trainer components and use the Evaluator and Pusher component to evaluate and push the trained model.
Refer to the components tutorial for more information on TFX pipeline components.
Note: Constructing a TFX Pipeline by setting up the individual components involves a lot of boilerplate code. For the purpose of this tutorial, it is alright if you do not fully understand every line of code in the pipeline setup.
Instantiate and run the ExampleGen Component
End of explanation
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
interactive_context.run(statistics_gen)
Explanation: Instantiate and run the StatisticsGen Component
End of explanation
infer_schema = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True)
interactive_context.run(infer_schema)
Explanation: Instantiate and run the SchemaGen Component
End of explanation
# Define the module file for the Trainer component
trainer_module_file = 'penguin_trainer.py'
%%writefile {trainer_module_file}
# Define the training algorithm for the Trainer module file
import os
from typing import List, Text
import tensorflow as tf
from tensorflow import keras
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
# Features used for classification - culmen length and depth, flipper length,
# body mass, and species.
_LABEL_KEY = 'species'
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema, batch_size: int) -> tf.data.Dataset:
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY), schema).repeat()
def _build_keras_model():
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
d = keras.layers.Dense(8, activation='relu')(d)
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
return model
def run_fn(fn_args: tfx.components.FnArgs):
schema = schema_pb2.Schema()
tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema)
train_dataset = _input_fn(
fn_args.train_files, fn_args.data_accessor, schema, batch_size=10)
eval_dataset = _input_fn(
fn_args.eval_files, fn_args.data_accessor, schema, batch_size=10)
model = _build_keras_model()
model.fit(
train_dataset,
epochs=int(fn_args.train_steps / 20),
steps_per_epoch=20,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
model.save(fn_args.serving_model_dir, save_format='tf')
Explanation: Instantiate and run the Trainer Component
End of explanation
trainer = tfx.components.Trainer(
module_file=os.path.abspath(trainer_module_file),
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=50))
interactive_context.run(trainer)
Explanation: Run the Trainer component.
End of explanation
_serving_model_dir = os.path.join(tempfile.mkdtemp(),
'serving_model/penguins_classification')
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(label_key='species', signature_name='serving_default')
],
metrics_specs=[
tfma.MetricsSpec(metrics=[
tfma.MetricConfig(
class_name='SparseCategoricalAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.6})))
])
],
slicing_specs=[tfma.SlicingSpec()])
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
schema=infer_schema.outputs['schema'],
eval_config=eval_config)
interactive_context.run(evaluator)
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
interactive_context.run(pusher)
Explanation: Evaluate and push the model
Use the Evaluator component to evaluate and 'bless' the model before using the Pusher component to push the model to a serving directory.
End of explanation
connection_config = interactive_context.metadata_connection_config
store = mlmd.MetadataStore(connection_config)
# All TFX artifacts are stored in the base directory
base_dir = connection_config.sqlite.filename_uri.split('metadata.sqlite')[0]
Explanation: Running the TFX pipeline populates the MLMD Database. In the next section, you use the MLMD API to query this database for metadata information.
Query the MLMD Database
The MLMD database stores three types of metadata:
Metadata about the pipeline and lineage information associated with the pipeline components
Metadata about artifacts that were generated during the pipeline run
Metadata about the executions of the pipeline
A typical production environment pipeline serves multiple models as new data arrives. When you encounter erroneous results in served models, you can query the MLMD database to isolate the erroneous models. You can then trace the lineage of the pipeline components that correspond to these models to debug your models.
Set up the metadata (MD) store with the InteractiveContext defined previously to query the MLMD database.
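As a quick orientation (a sketch; the exact counts depend on how many pipeline runs the store has seen), you can ask the store how much of each kind it now holds -- contexts, artifacts and executions roughly correspond to the three kinds listed above:
print('artifacts:', len(store.get_artifacts()))
print('executions:', len(store.get_executions()))
print('contexts:', len(store.get_contexts()))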
End of explanation
def display_types(types):
# Helper function to render dataframes for the artifact and execution types
table = {'id': [], 'name': []}
for a_type in types:
table['id'].append(a_type.id)
table['name'].append(a_type.name)
return pd.DataFrame(data=table)
def display_artifacts(store, artifacts):
# Helper function to render dataframes for the input artifacts
table = {'artifact id': [], 'type': [], 'uri': []}
for a in artifacts:
table['artifact id'].append(a.id)
artifact_type = store.get_artifact_types_by_id([a.type_id])[0]
table['type'].append(artifact_type.name)
table['uri'].append(a.uri.replace(base_dir, './'))
return pd.DataFrame(data=table)
def display_properties(store, node):
# Helper function to render dataframes for artifact and execution properties
table = {'property': [], 'value': []}
for k, v in node.properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
for k, v in node.custom_properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
return pd.DataFrame(data=table)
Explanation: Create some helper functions to view the data from the MD store.
End of explanation
display_types(store.get_artifact_types())
Explanation: First, query the MD store for a list of all its stored ArtifactTypes.
End of explanation
pushed_models = store.get_artifacts_by_type("PushedModel")
display_artifacts(store, pushed_models)
Explanation: Next, query all PushedModel artifacts.
End of explanation
pushed_model = pushed_models[-1]
display_properties(store, pushed_model)
Explanation: Query the MD store for the latest pushed model. This tutorial has only one pushed model.
End of explanation
def get_one_hop_parent_artifacts(store, artifacts):
# Get a list of artifacts within a 1-hop of the artifacts of interest
artifact_ids = [artifact.id for artifact in artifacts]
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids(artifact_ids)
if event.type == mlmd.proto.Event.OUTPUT)
artifacts_ids = set(
event.artifact_id
for event in store.get_events_by_execution_ids(executions_ids)
if event.type == mlmd.proto.Event.INPUT)
return [artifact for artifact in store.get_artifacts_by_id(artifacts_ids)]
Explanation: One of the first steps in debugging a pushed model is to look at which trained model is pushed and to see which training data is used to train that model.
MLMD provides traversal APIs to walk through the provenance graph, which you can use to analyze the model provenance.
End of explanation
parent_artifacts = get_one_hop_parent_artifacts(store, [pushed_model])
display_artifacts(store, parent_artifacts)
Explanation: Query the parent artifacts for the pushed model.
End of explanation
exported_model = parent_artifacts[0]
display_properties(store, exported_model)
Explanation: Query the properties for the model.
End of explanation
model_parents = get_one_hop_parent_artifacts(store, [exported_model])
display_artifacts(store, model_parents)
Explanation: Query the upstream artifacts for the model.
End of explanation
used_data = model_parents[0]
display_properties(store, used_data)
Explanation: Get the training data the model trained with.
End of explanation
display_types(store.get_execution_types())
Explanation: Now that you have the training data that the model trained with, query the database again to find the training step (execution). Query the MD store for a list of the registered execution types.
End of explanation
def find_producer_execution(store, artifact):
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids([artifact.id])
if event.type == mlmd.proto.Event.OUTPUT)
return store.get_executions_by_id(executions_ids)[0]
trainer = find_producer_execution(store, exported_model)
display_properties(store, trainer)
Explanation: The training step is the ExecutionType named tfx.components.trainer.component.Trainer. Traverse the MD store to get the trainer run that corresponds to the pushed model.
End of explanation |
11,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with data files
Reading and writing data files is a common task, and Python offers native support for working with many kinds of data files. Today, we're going to be working mainly with CSVs.
Import the csv module
We're going to be working with delimited text files, so the first thing we need to do is import this functionality from the standard library.
Step1: Opening a file to read the contents
We're going to use something called a with statement to open a file and read the contents. The open() function takes at least two arguments
Step2: Simple filtering
If you wanted to filter your data, you could use an if statement inside your with block.
Step3: Exercise
Read in the MLB data, print only the names and salaries of players who make at least $1 million. (Hint
Step4: DictReader
Step5: Writing to CSV files
You can also use the csv module to create csv files -- same idea, you just need to change the mode to 'w'. As with reading, there's a list-based writing method and a dictionary-based method.
Step6: Using DictWriter to write data
Similar to using the list-based method, except that you need to ensure that the keys in your dictionaries of data match exactly a list of fieldnames.
Step7: You can open multiple files for reading/writing
Sometimes you want to open multiple files at the same time. One thing you might want to do | Python Code:
import csv
Explanation: Working with data files
Reading and writing data files is a common task, and Python offers native support for working with many kinds of data files. Today, we're going to be working mainly with CSVs.
Import the csv module
We're going to be working with delimited text files, so the first thing we need to do is import this functionality from the standard library.
End of explanation
# open the MLB data file `as` mlb
with open('data/mlb.csv', 'r') as mlb:
# create a reader object
reader = csv.reader(mlb)
# loop over the rows in the file
for row in reader:
# assign variables to each element in the row (shortcut!)
name, team, position, salary, start_year, end_year, years = row
# print the row, which is a list
print(row)
Explanation: Opening a file to read the contents
We're going to use something called a with statement to open a file and read the contents. The open() function takes at least two arguments: The path to the file you're opening and what "mode" you're opening it in.
To start with, we're going to use the 'r' mode to read the data. We'll use the default arguments for delimiter -- comma -- and we don't need to specify a quote character.
Important: If you open a data file in w (write) mode, anything that's already in the file will be erased.
The file we're using -- MLB roster data from 2017 -- lives at data/mlb.csv.
Once we have the file open, we're going to use some functionality from the csv module to iterate over the lines of data and print each one.
Specifically, we're going to use the csv.reader method, which returns a list of lines in the data file. Each line, in turn, is a list of the "cells" of data in that line.
Then we're going to loop over the lines of data and print each line. We can also use bracket notation to retrieve elements from inside each line of data.
End of explanation
# open the MLB data file `as` mlb
with open('data/mlb.csv', 'r') as mlb:
# create a reader object
reader = csv.reader(mlb)
# move past the header row
next(reader)
# loop over the rows in the file
for row in reader:
# assign variables to each element in the row (shortcut!)
name, team, position, salary, start_year, end_year, years = row
# print the line of data ~only~ if the player is on the Twins
if team == 'MIN':
# print the row, which is a list
print(row)
Explanation: Simple filtering
If you wanted to filter your data, you could use an if statement inside your with block.
End of explanation
# open the MLB data file `as` mlb
with open('data/mlb.csv', 'r') as mlb:
# create a reader object
reader = csv.reader(mlb)
# move past the header row
next(reader)
# loop over the rows in the file
for row in reader:
# assign variables to each element in the row (shortcut!)
name, team, position, salary, start_year, end_year, years = row
        # print the line of data ~only~ if the player makes at least $1 million
if int(salary) >= 1000000:
            # print just the player's name and salary
print(name, salary)
Explanation: Exercise
Read in the MLB data, print only the names and salaries of players who make at least $1 million. (Hint: Use type coercion!)
End of explanation
# open the MLB data file `as` mlb
with open('data/mlb.csv', 'r') as mlb:
# create a reader object
reader = csv.DictReader(mlb)
# loop over the rows in the file
for row in reader:
# print just the player's name (the column header is "NAME")
print(row['NAME'])
Explanation: DictReader: Another way to read CSV files
Sometimes it's more convenient to work with data files as a list of dictionaries instead of a list of lists. That way, you don't have to remember the position of each "column" of data -- you can just reference the column name. To do it, we'll use a csv.DictReader object instead of a csv.reader object. Otherwise the code is much the same.
End of explanation
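As a quick aside -- a sketch that isn't part of the original lesson -- here's how filtering looks with DictReader, referencing columns by name instead of position. The header names 'TEAM' and 'SALARY' are assumptions; check your file's header row and adjust them if they differ.
# Sketch: filter with DictReader using column names (header names assumed)
with open('data/mlb.csv', 'r') as mlb:
    reader = csv.DictReader(mlb)
    for row in reader:
        # keep only Twins players making at least $1 million
        if row['TEAM'] == 'MIN' and int(row['SALARY']) >= 1000000:
            print(row['NAME'], row['SALARY'])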
# define the column names
COLNAMES = ['name', 'org', 'position']
# let's make a few rows of data to write
DATA_TO_WRITE = [
['Cody', 'IRE', 'Training Director'],
['Maggie', 'The New York Times', 'Reporter'],
['Donald', 'The White House', 'President']
]
# open an output file in write mode
with open('people-list.csv', 'w') as outfile:
# create a writer object
writer = csv.writer(outfile)
# write the header row
writer.writerow(COLNAMES)
# loop over the data and write to file
for human in DATA_TO_WRITE:
writer.writerow(human)
Explanation: Writing to CSV files
You can also use the csv module to create csv files -- same idea, you just need to change the mode to 'w'. As with reading, there's a list-based writing method and a dictionary-based method.
End of explanation
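A compact variant worth knowing (a sketch, not part of the original lesson): if your rows are already collected in a list, you can hand the whole list to writerows() instead of looping. This reuses the COLNAMES and DATA_TO_WRITE lists defined above and writes to a separate, made-up filename so it doesn't overwrite the file you just created.
# Sketch: write all rows in a single call with writerows()
with open('people-list-writerows.csv', 'w') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(COLNAMES)
    writer.writerows(DATA_TO_WRITE)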
# define the column names
COLNAMES = ['name', 'org', 'position']
# let's make a few rows of data to write
DATA_TO_WRITE = [
{'name': 'Cody', 'org': 'IRE', 'position': 'Training Director'},
{'name': 'Maggie', 'org': 'The New York Times', 'position': 'Reporter'},
{'name': 'Donald', 'org': 'The White House', 'position': 'President'}
]
# open an output file in write mode
with open('people-dict.csv', 'w') as outfile:
# create a writer object -- pass the list of column names to the `fieldnames` keyword argument
writer = csv.DictWriter(outfile, fieldnames=COLNAMES)
# use the writeheader method to write the header row
writer.writeheader()
# loop over the data and write to file
for human in DATA_TO_WRITE:
writer.writerow(human)
Explanation: Using DictWriter to write data
Similar to using the list-based method, except that you need to ensure that the keys in your dictionaries of data match exactly a list of fieldnames.
End of explanation
# open the MLB data file `as` mlb
# also, open `mlb-copy.csv` to write to
with open('data/mlb.csv', 'r') as mlb, open('mlb-copy.csv', 'w') as mlb_copy:
# create a reader object
reader = csv.DictReader(mlb)
# create a writer object
# we're going to use the `fieldnames` attribute of the DictReader object
# as our output headers, as well
# b/c we're basically just making a copy
writer = csv.DictWriter(mlb_copy, fieldnames=reader.fieldnames)
# write header row
writer.writeheader()
# loop over the rows in the file
for row in reader:
# what type of object is `row`?
# how would we find out?
# write row to output file
writer.writerow(row)
Explanation: You can open multiple files for reading/writing
Sometimes you want to open multiple files at the same time. One thing you might want to do: open a file of raw data in read mode, clean each row in a loop, and write the clean data out to a new file.
You can open multiple files in the same with block -- just separate your open() functions with a comma.
For this example, we're not going to do any cleaning -- we're just going to copy the contents of one file to another.
End of explanation |
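Since the example above only copies the data, here is a rough sketch of what a cleaning step could look like -- stripping stray whitespace from every value as rows pass through. It takes the column names from the reader's fieldnames, so no header names are assumed; the output filename mlb-clean.csv is made up for the sketch.
# Sketch: clean each row while copying to a new file
with open('data/mlb.csv', 'r') as mlb, open('mlb-clean.csv', 'w') as mlb_clean:
    reader = csv.DictReader(mlb)
    writer = csv.DictWriter(mlb_clean, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # strip leading/trailing whitespace from every value before writing
        cleaned = {key: value.strip() for key, value in row.items()}
        writer.writerow(cleaned)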
11,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Image As Greyscale
Step2: Blur Image
Step3: View Image | Python Code:
# Load image
import cv2
import numpy as np
from matplotlib import pyplot as plt
Explanation: Title: Blurring Images
Slug: blurring_images
Summary: How to blur images using OpenCV in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Images
Authors: Chris Albon
Preliminaries
End of explanation
# Load image as grayscale
image = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)
Explanation: Load Image As Greyscale
End of explanation
# Blur image
image_blurry = cv2.blur(image, (5,5))
Explanation: Blur Image
End of explanation
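cv2.blur() applies a simple box (averaging) filter. As a side note -- a sketch, not part of the original recipe -- OpenCV also provides other smoothing filters that are often worth comparing on the same image:
# Gaussian blur: weights neighboring pixels with a Gaussian kernel (kernel size, then sigma)
image_gaussian = cv2.GaussianBlur(image, (5, 5), 0)
# Median blur: replaces each pixel with the neighborhood median; useful against salt-and-pepper noise
image_median = cv2.medianBlur(image, 5)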
# Show image
plt.imshow(image_blurry, cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()
Explanation: View Image
End of explanation |
11,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Note
Step3: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step4: Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
Step5: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. It is recommended to choose the region closest to you.
Americas
Step6: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outsides of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the notebook.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify
Step13: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
Step14: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when referred to the worker pool specification, replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step16: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary
Step17: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step18: Create and run custom training job
To train a custom model, you perform two steps
Step19: Prepare your command-line arguments
Now define the command-line arguments for your custom training container
Step20: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters
Step21: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step22: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements
Step23: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step24: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
Step25: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to a Vertex Model resource. These settings are referred to as the explanation metadata, which consists of
Step26: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of
Step27: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters
Step28: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters
Step29: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
Step30: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is
Step31: Understanding the explanations response
First, you will look at what your model predicted and compare it to the actual value.
Step32: Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
Step33: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section, you'll send 10 test examples to your model for prediction to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method.
Get explanations
Step34: Sanity check
In the function below you perform a sanity check on the explanations.
Step35: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step36: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the notebook.
Otherwise, you can delete the individual resources you created in this notebook | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: Custom Training Tabular Regression Models for Online Prediction and Explainability
Overview
This notebook demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for online prediction with explanation.
Dataset
The dataset used for this notebook is the Boston Housing Prices dataset. The version of the dataset you will use in this notebook is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.
Learning objectives
In this notebook, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a prediction with explanations on the deployed model by sending data. You can alternatively create custom models using gcloud command-line tool or online using Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train a TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Set explanation parameters.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction with explanation.
Undeploy the Model resource.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Note: Please ignore any incompatibility warnings and errors.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. It is recommended to choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
BUCKET_NAME = "gs://[your-project-id]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the notebook.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
import os
import sys
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this notebook. It is a known issue and fixed in TF 2.3 -- which is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when it is referred to in the worker pool specification, the directory slash is replaced with a dot (trainer.task) and the .py file suffix is dropped.
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
def scale(feature):
max = np.max(feature)
        feature = (feature / max).astype(np.float32)
return feature, max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
for _ in range(13):
        # scale each of the 13 feature columns by its maximum value
        x_train[:, _], max = scale(x_train[:, _])
        x_test[:, _], _ = scale(x_test[:, _])
params.append(max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary:
Gets the directory in which to save the model artifacts from the command-line argument --model-dir and, if that is not specified, from the environment variable AIP_MODEL_DIR.
Loads the Boston Housing dataset from the TF.Keras built-in datasets.
Builds a simple deep neural network model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs specified by args.epochs.
Saves the trained model (save(args.model_dir)) to the specified model directory.
Saves the maximum value for each feature (f.write(str(params))) to the specified parameters file.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
# Define your custom training job
job = # TODO 1: Your code goes here(
display_name="boston_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
Explanation: Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human-readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
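For reference only, here is a sketch of how TODO 1 might be completed, mirroring the arguments already shown in the cell above; it is an illustration, not necessarily the official lab solution.
# Sketch: create the custom training job from the Python training script
job = aip.CustomTrainingJob(
    display_name="boston_" + TIMESTAMP,
    script_path="custom/trainer/task.py",
    container_uri=TRAIN_IMAGE,
    requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)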
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
Explanation: Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
args: The command-line arguments to pass to the executable that is set as the entry point into the container.
--model-dir : For demonstrations, this command-line argument is used to specify where to store the model artifacts.
direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
"--epochs=" + EPOCHS: The number of epochs for training.
"--steps=" + STEPS: The number of steps per epoch.
End of explanation
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
Explanation: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
args: The command-line arguments to pass to the training script.
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
for _ in range(13):
    # scale each feature column by its maximum, matching the preprocessing in the training script
    x_test[:, _] = scale(x_test[:, _])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).
You don't need the training data, which is why it's loaded as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescale) the data in each column by dividing each value by the maximum value of that column. This replaces every single value with a 32-bit floating-point number between 0 and 1.
End of explanation
# TODO 2: Your code goes here
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
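Since local_model is a plain TF.Keras model compiled with an MSE loss, the evaluation in TODO 2 can be as simple as a call to evaluate() on the preprocessed holdout data -- a sketch for reference, not necessarily the official lab solution.
# Sketch: report the mean squared error on the holdout split
local_model.evaluate(x_test, y_test)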
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
End of explanation
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
Explanation: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to a Vertex Model resource. These settings are referred to as the explanation metadata, which consists of:
parameters: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:
Shapley - Note, not recommended for image data -- can be very long running
XRAI
Integrated Gradients
metadata: This is the specification for how the algorithm is applied on your custom model.
Explanation Parameters
Let's first dive deeper into the settings for the explainability algorithm.
Shapley
Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.
Use Cases:
- Classification and regression on tabular data.
Parameters:
path_count: This is the number of paths over the features that will be processed by the algorithm. An exact computation of the Shapley values requires M! paths, where M is the number of features. For an image dataset such as MNIST, M would already be 784 (28*28).
For any non-trivial number of features, this is too compute expensive. You can reduce the number of paths over the features to M * path_count.
Integrated Gradients
A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.
Use Cases:
- Classification and regression on tabular data.
- Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.
XRAI
Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.
Use Cases:
Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.
In the next code cell, set the variable XAI to the explainability algorithm you will use on your custom model.
End of explanation
INPUT_METADATA = {
"input_tensor_name": serving_input,
"encoding": "BAG_OF_FEATURES",
"modality": "numeric",
"index_feature_mapping": [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
],
}
OUTPUT_METADATA = {"output_tensor_name": serving_output}
input_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)
output_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)
metadata = aip.explain.ExplanationMetadata(
inputs={"features": input_metadata}, outputs={"medv": output_metadata}
)
Explanation: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of:
outputs: A scalar value in the output to attribute -- what to explain. For example, in a probability output [0.1, 0.2, 0.7] for classification, one wants an explanation for 0.7. Consider the following formulae, where the output is y and that's what you want to understand.
y = f(x)
Consider the following formulae, where the outputs are y and z. Since you can only do attribution for one scalar value, you have to pick whether you want to explain the output y or z. Assume in this example the model is object detection and y and z are the bounding box and the object classification. You would want to pick which of the two outputs to explain.
y, z = f(x)
The dictionary format for outputs is:
{ "outputs": { "[your_display_name]":
"output_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human-readable name you assign to the output to explain. A common example is "probability".<br/>
- "output_tensor_name": The key/value field to identify the output layer to explain. <br/>
- [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.
</blockquote>
inputs: The features for attribution -- how they contributed to the output. Consider the following formulae, where a and b are the features. You have to pick the features that explain how they contributed. Assume that this model is deployed for A/B testing, where a are the data_items for the prediction and b identifies whether the model instance is A or B. You would want to pick a (or some subset of) for the features, and not b since it does not contribute to the prediction.
y = f(a,b)
The minimum dictionary format for inputs is:
{ "inputs": { "[your_display_name]":
"input_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human-readable name you assign to the input to explain. A common example is "features".<br/>
- "input_tensor_name": The key/value field to identify the input layer for the feature attribution. <br/>
- [layer]: The input layer for feature attribution. In a single input tensor model, it is the first (bottom-most) layer in the model.
</blockquote>
Since the inputs to the model are tabular, you can specify the following two additional fields as reporting/visualization aids:
<blockquote>
- "modality": "image": Indicates the field values are image data.
</blockquote>
Since the inputs to the model are tabular, you can specify the following two additional fields as reporting/visualization aids:
<blockquote>
- "encoding": "BAG_OF_FEATURES" : Indicates that the inputs are set of tabular features.<br/>
- "index_feature_mapping": [ feature-names ] : A list of human-readable names for each feature. For this example, you use the feature names specified in the dataset.<br/>
- "modality": "numeric": Indicates the field values are numeric.
</blockquote>
End of explanation
model = # TODO 3: Your code goes here(
display_name="boston_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human-readable name for the Model resource.
artifact_uri: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
explanation_parameters: Parameters to configure explaining for Model's predictions.
explanation_metadata: Metadata describing the Model's input and output for explanation.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
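For reference, here is a sketch of how TODO 3 could be completed with the parameters described above (an illustration, not necessarily the official lab solution).
# Sketch: upload the trained model together with its explanation settings
model = aip.Model.upload(
    display_name="boston_" + TIMESTAMP,
    artifact_uri=MODEL_DIR,
    serving_container_image_uri=DEPLOY_IMAGE,
    explanation_parameters=parameters,
    explanation_metadata=metadata,
    sync=False,
)
model.wait()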
DEPLOYED_NAME = "boston-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = # TODO 4a: Your code goes here(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = # TODO 4b: Your code goes here(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human-readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
machine_type: The type of machine to use for training.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
starting_replica_count: The number of compute instances to initially provision.
max_replica_count: The maximum number of compute instances to scale to. In this notebook, only one instance is provisioned.
This can take 10-15 minutes to complete.
End of explanation
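For reference, a sketch of the CPU branch of TODO 4 using model.deploy(); the GPU branch simply adds the accelerator arguments shown in the cell above (an illustration, not necessarily the official lab solution).
# Sketch: deploy the uploaded model to an endpoint (CPU variant shown)
endpoint = model.deploy(
    deployed_model_display_name=DEPLOYED_NAME,
    traffic_split=TRAFFIC_SPLIT,
    machine_type=DEPLOY_COMPUTE,
    min_replica_count=MIN_NODES,
    max_replica_count=MAX_NODES,
)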
test_item = x_test[0]
test_label = y_test[0]
print(test_item.shape)
Explanation: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
End of explanation
instances_list = [test_item.tolist()]
prediction = # TODO 5: Your code goes here
print(prediction)
Explanation: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[feature_list]
Since the explain() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the explain() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
predictions: The prediction per instance.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
explanations: The feature attributions
End of explanation
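For reference, a sketch of TODO 5: pass the one-item list to the endpoint's explain() method (an illustration, not necessarily the official lab solution).
# Sketch: request an online prediction with feature attributions
prediction = endpoint.explain(instances=instances_list)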
value = prediction[0][0][0]
print("Predicted Value:", value)
Explanation: Understanding the explanations response
First, you will look at what your model predicted and compare it to the actual value.
End of explanation
!python3 -m pip install tabulate
from tabulate import tabulate
feature_names = [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
]
attributions = prediction.explanations[0].attributions[0].feature_attributions
rows = []
for i, val in enumerate(feature_names):
rows.append([val, test_item[i], attributions[val]])
print(tabulate(rows, headers=["Feature name", "Feature value", "Attribution value"]))
Explanation: Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
End of explanation
# Prepare 10 test examples to your model for prediction
instances = []
for i in range(10):
instances.append(x_test[i].tolist())
response = endpoint.explain(instances)
Explanation: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section, you'll send 10 test examples to your model for prediction to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method.
Get explanations
End of explanation
import numpy as np
def sanity_check_explanations(
explanation, prediction, mean_tgt_value=None, variance_tgt_value=None
):
passed_test = 0
total_test = 1
# `attributions` is a dict where keys are the feature names
# and values are the feature attributions for each feature
baseline_score = explanation.attributions[0].baseline_output_value
print("baseline:", baseline_score)
# Sanity check 1
# The prediction at the input is equal to that at the baseline.
# Please use a different baseline. Some suggestions are: random input, training
# set mean.
if abs(prediction - baseline_score) <= 0.05:
print("Warning: example score and baseline score are too close.")
print("You might not get attributions.")
else:
passed_test += 1
print("Sanity Check 1: Passed")
print(passed_test, " out of ", total_test, " sanity checks passed.")
i = 0
for explanation in response.explanations:
try:
prediction = np.max(response.predictions[i]["scores"])
except TypeError:
prediction = np.max(response.predictions[i])
sanity_check_explanations(explanation, prediction)
i += 1
Explanation: Sanity check
In the function below you perform a sanity check on the explanations.
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline trainig job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom trainig job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the notebook.
Otherwise, you can delete the individual resources you created in this notebook:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
11,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Post-training weight quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Train a TensorFlow model
Step3: For the example, since you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.
Now load the model using the TFLiteConverter
Step4: Write it out to a tflite file
Step5: To quantize the model on export, set the optimizations flag to optimize for size
Step6: Note how the resulting file, is approximately 1/4 the size.
Step7: Run the TFLite models
Run the TensorFlow Lite model using the Python TensorFlow Lite
Interpreter.
Load the model into an interpreter
Step8: Test the model on one image
Step9: Evaluate the models
Step10: Repeat the evaluation on the weight quantized model to obtain
Step11: In this example, the compressed model has no difference in the accuracy.
Optimizing an existing model
Resnets with pre-activation layers (Resnet-v2) are widely used for vision applications.
Pre-trained frozen graph for resnet-v2-101 is available on
Tensorflow Hub.
You can convert the frozen graph to a TensorFLow Lite flatbuffer with quantization by | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
from tensorflow import keras
import numpy as np
import pathlib
Explanation: Post-training weight quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
TensorFlow Lite now supports
converting weights to 8 bit precision as part of model conversion from
tensorflow graphdefs to TensorFlow Lite's flat buffer format. Weight quantization
achieves a 4x reduction in the model size. In addition, TFLite supports on the
fly quantization and dequantization of activations to allow for:
Using quantized kernels for faster implementation when available.
Mixing of floating-point kernels with quantized kernels for different parts
of the graph.
The activations are always stored in floating point. For ops that
support quantized kernels, the activations are quantized to 8 bits of precision
dynamically prior to processing and are de-quantized to float precision after
processing. Depending on the model being converted, this can give a speedup over
pure floating point computation.
In contrast to
quantization aware training
, the weights are quantized post training and the activations are quantized dynamically
at inference in this method.
Therefore, the model weights are not retrained to compensate for quantization
induced errors. It is important to check the accuracy of the quantized model to
ensure that the degradation is acceptable.
This tutorial trains an MNIST model from scratch, checks its accuracy in
TensorFlow, and then converts the model into a Tensorflow Lite flatbuffer
with weight quantization. Finally, it checks the
accuracy of the converted model and compare it to the original float model.
Build an MNIST model
Setup
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
Explanation: Train a TensorFlow model
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Explanation: Because you trained the model for just a single epoch in this example, it only reaches ~96% accuracy.
Convert to a TensorFlow Lite model
Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.
Now load the model using the TFLiteConverter:
End of explanation
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
Explanation: Write it out to a tflite file:
End of explanation
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
Explanation: To quantize the model on export, set the optimizations flag to optimize for size:
End of explanation
!ls -lh {tflite_models_dir}
Explanation: Note how the resulting quantized file is approximately 1/4 the size of the float model.
End of explanation
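If you prefer to compute the ratio in Python rather than reading the ls output, a short sketch using the model paths defined above:
import os

float_size = os.path.getsize(tflite_model_file)
quant_size = os.path.getsize(tflite_model_quant_file)
print("float model:     %.1f KiB" % (float_size / 1024))
print("quantized model: %.1f KiB" % (quant_size / 1024))
print("size reduction:  %.2fx" % (float_size / quant_size))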
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
Explanation: Run the TFLite models
Run the TensorFlow Lite models using the Python TensorFlow Lite Interpreter.
Load the model into an interpreter
End of explanation
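Before running inference, it can be useful to inspect the interpreter's tensor metadata; a small sketch (the exact values printed depend on the converted model):
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
print("input :", input_details["shape"], input_details["dtype"])
print("output:", output_details["shape"], output_details["dtype"])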
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pyplot as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
Explanation: Test the model on one image
End of explanation
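As a quick comparison, the same image can be run through the weight-quantized interpreter; in most cases the predicted digit should match the float model:
input_index_quant = interpreter_quant.get_input_details()[0]["index"]
output_index_quant = interpreter_quant.get_output_details()[0]["index"]
interpreter_quant.set_tensor(input_index_quant, test_image)
interpreter_quant.invoke()
predictions_quant = interpreter_quant.get_tensor(output_index_quant)
print("float prediction:    ", np.argmax(predictions[0]))
print("quantized prediction:", np.argmax(predictions_quant[0]))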
# A helper function to evaluate the TF Lite model using the "test" dataset.
def evaluate_model(interpreter):
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

    # Run predictions on every image in the "test" dataset.
    prediction_digits = []
    for test_image in test_images:
        # Pre-processing: add batch dimension and convert to float32 to match with
        # the model's input data format.
        test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
        interpreter.set_tensor(input_index, test_image)

        # Run inference.
        interpreter.invoke()

        # Post-processing: remove batch dimension and find the digit with highest
        # probability.
        output = interpreter.tensor(output_index)
        digit = np.argmax(output()[0])
        prediction_digits.append(digit)

    # Compare prediction results with ground truth labels to calculate accuracy.
    accurate_count = 0
    for index in range(len(prediction_digits)):
        if prediction_digits[index] == test_labels[index]:
            accurate_count += 1
    accuracy = accurate_count * 1.0 / len(prediction_digits)

    return accuracy
print(evaluate_model(interpreter))
Explanation: Evaluate the models
End of explanation
print(evaluate_model(interpreter_quant))
Explanation: Repeat the evaluation on the weight-quantized model to obtain its accuracy:
End of explanation
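Accuracy is only half the story; a rough latency comparison can be made by timing repeated invocations. This is an informal sketch (results depend heavily on the machine and TFLite build), not a proper benchmark:
import time

def time_inference(interp, image, runs=200):
    # Set the input once, then average the cost of repeated invoke() calls.
    index = interp.get_input_details()[0]["index"]
    interp.set_tensor(index, image)
    start = time.perf_counter()
    for _ in range(runs):
        interp.invoke()
    return (time.perf_counter() - start) / runs

image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
print("float model:     %.3f ms/inference" % (1000 * time_inference(interpreter, image)))
print("quantized model: %.3f ms/inference" % (1000 * time_inference(interpreter_quant, image)))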
import tensorflow_hub as hub
resnet_v2_101 = tf.keras.Sequential([
keras.layers.InputLayer(input_shape=(224, 224, 3)),
hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4")
])
converter = tf.lite.TFLiteConverter.from_keras_model(resnet_v2_101)
# Convert to TF Lite without quantization
resnet_tflite_file = tflite_models_dir/"resnet_v2_101.tflite"
resnet_tflite_file.write_bytes(converter.convert())
# Convert to TF Lite with quantization
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
resnet_quantized_tflite_file = tflite_models_dir/"resnet_v2_101_quantized.tflite"
resnet_quantized_tflite_file.write_bytes(converter.convert())
!ls -lh {tflite_models_dir}/*.tflite
Explanation: In this example, the quantized model shows no difference in accuracy compared to the float model.
Optimizing an existing model
ResNets with pre-activation layers (ResNet-v2) are widely used for vision applications.
A pre-trained ResNet-v2-101 model is available on TensorFlow Hub.
You can convert it to a TensorFlow Lite FlatBuffer with quantization by:
End of explanation |
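As a quick sanity check (not an accuracy evaluation, which would require an ImageNet-style input pipeline), you can load the quantized ResNet and run a single inference on random data to confirm the input and output shapes:
interpreter_resnet = tf.lite.Interpreter(model_path=str(resnet_quantized_tflite_file))
interpreter_resnet.allocate_tensors()
input_detail = interpreter_resnet.get_input_details()[0]
output_detail = interpreter_resnet.get_output_details()[0]

dummy = np.random.rand(*input_detail["shape"]).astype(np.float32)   # random image with values in [0, 1]
interpreter_resnet.set_tensor(input_detail["index"], dummy)
interpreter_resnet.invoke()
print("output shape:", interpreter_resnet.get_tensor(output_detail["index"]).shape)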
11,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 3
Imports
Step1: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta
$$
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
Step7: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega(0)=0$.
Decrease your atol and rtol even further and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 3
Imports
End of explanation
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
Explanation: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta
$$
When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t)
$$
In this equation:
$a$ governs the strength of the damping.
$b$ governs the strength of the driving force.
$\omega_0$ is the angular frequency of the driving force.
When $a=0$ and $b=0$, the energy/mass is conserved:
$$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$
Basic setup
Here are the basic parameters we are going to use for this exercise:
End of explanation
def derivs(y, t, a, b, omega0):
    """Compute the derivatives of the damped, driven pendulum.

    Parameters
    ----------
    y : ndarray
        The solution vector at the current time t[i]: [theta[i],omega[i]].
    t : float
        The current time t[i].
    a, b, omega0: float
        The parameters in the differential equation.

    Returns
    -------
    dy : ndarray
        The vector of derivatives at t[i]: [dtheta[i],domega[i]].
    """
    theta = y[0]
    omega = y[1]
    dtheta = omega
    domega = (-g/l)*np.sin(theta) - a*omega - b*np.sin(omega0*t)
    dy = np.array([dtheta, domega])
    return dy
derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0)
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
    """Compute the energy for the state array y.

    The state array y can have two forms:

    1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
    2. It could be an ndim=2 array where each row is the [theta,omega] at a single
       time.

    Parameters
    ----------
    y : ndarray, list, tuple
        A solution vector

    Returns
    -------
    E/m : float (ndim=1) or ndarray (ndim=2)
        The energy per mass.
    """
    if np.ndim(y) == 1:
        theta = y[0]
        omega = y[1]
    elif np.ndim(y) == 2:
        theta = y[:, 0]
        omega = y[:, 1]
    Em = g*l*(1 - np.cos(theta)) + 0.5*(l**2)*(omega**2)
    return Em
energy(np.ones((10,2)))
assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
End of explanation
a = 0
b = 0
omega0 = 0
ic = np.array([np.pi,0.0])
soln = odeint(derivs, ic, t, args=(a, b, omega0), atol=1e-5, rtol=1e-4)
theta = soln[:,0]
omega = soln[:,1]
Em = energy(soln)
theta, omega, Em
plt.figure(figsize=(5,3))
plt.plot(t, Em)
plt.ylim(0,12)
plt.xlabel('Time')
plt.ylabel('Energy/Mass')
plt.title('Energy/Mass vs. Time');
plt.figure(figsize=(5,3))
plt.plot(t, omega)
plt.ylim(-np.pi,np.pi)
plt.yticks([-np.pi,0,np.pi],['$-\pi$',0,'$\pi$'])
plt.xlabel('Time')
plt.ylabel('Angular Velocity')
plt.title('Angular Velocity vs. Time');
plt.figure(figsize=(5,3))
plt.plot(t, theta)
plt.ylim(0,2*np.pi)
plt.yticks([0,np.pi,2*np.pi],[0,'$\pi$','2$\pi$'])
plt.xlabel('Time')
plt.ylabel('Angular Position')
plt.title('Angular Position vs. Time');
assert True # leave this to grade the two plots and their tuning of atol, rtol.
Explanation: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
End of explanation
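One way to automate the tolerance tuning described above is to sweep atol/rtol and watch the energy drift; a small sketch using the functions already defined (the tolerance grid shown is just a starting point):
# Sweep tolerances from the suggested starting values down by orders of magnitude
# and report how far E/m drifts from its initial value for the undriven pendulum.
ic = np.array([np.pi, 0.0])
for k in range(2, 7):
    atol_k, rtol_k = 10.0**(-(k+1)), 10.0**(-k)
    soln_k = odeint(derivs, ic, t, args=(0.0, 0.0, 0.0), atol=atol_k, rtol=rtol_k)
    drift = np.max(np.abs(energy(soln_k) - energy(ic)))
    print("atol=%.0e rtol=%.0e  max energy drift = %.3e" % (atol_k, rtol_k, drift))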
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
    """Integrate the damped, driven pendulum and make a phase plot of the solution."""
    ic = np.array([-np.pi+0.1, 0.0])
    soln = odeint(derivs, ic, t, args=(a, b, omega0), atol=1e-9, rtol=1e-8)
    theta = soln[:, 0]
    omega = soln[:, 1]
    plt.figure(figsize=(8, 5))
    plt.plot(theta, omega)
    plt.xlabel('Angular Position')
    plt.ylabel('Angular Velocity')
    plt.xlim(-2*np.pi, 2*np.pi)
    plt.ylim(-10, 10)
    plt.title('Angular Velocity vs. Angular Position')
Explanation: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega(0)=0$.
Decrease your atol and rtol even further and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
End of explanation
plot_pendulum(0.5, 0.0, 0.0)
Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
End of explanation
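For contrast, a driven case shows qualitatively different behavior once the forcing term is switched on (these parameter values are purely illustrative):
plot_pendulum(0.5, 5.0, 2.0)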
interact(plot_pendulum, a=(0.0,1.0,0.1), b=(0.0,10.0,0.1), omega0=(0.0,10.0,0.1));
Explanation: Use interact to explore the plot_pendulum function with:
a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.
b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
End of explanation |