Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k) |
---|---|---|
13,400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prepare a minimal data set.
Step1: Plotting separate data columns as separate sub-plots
Step2: Plot multiple groups (from same data columns) onto the same plot.
Step3: Jake Vanderplas | Python Code:
# imports assumed by the cells below (the notebook's own import cell is not shown here)
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({'age':[1.,2,3,4,5,6,7,8,9],
'height':[4, 4.5, 5, 6, 7, 8, 9, 9.5, 10],
'gender':['M','F', 'F','M','M','F', 'F','M', 'F'],
#'hair color':['brown','black', 'brown', 'blonde', 'brown', 'red',
# 'brown', 'brown', 'black' ],
'hair length':[1,6,2,3,1,5,6,5,3] })
Explanation: Prepare a minimal data set.
End of explanation
def plot_2_subplots(df, x1, y1, y2, x2=None, title=None):
fig, axs = plt.subplots(2, 1, figsize=(5, 4))
colors = ['c','b']
# get the data array for x1:
x1d = df[x1]
# get the data array for x2:
if x2 is None: # use x1 as x2
x2d=df[x1]
x2=x1
# todo (?) share x axis if x2 was None?
else:
x2d=df[x2]
# get the data arrays for y1, y2:
y1d=df[y1]
y2d=df[y2]
axs[0].plot(x1d, y1d, linestyle='--', marker='o', color=colors[0]) #, label=y1)
axs[0].set_xlabel(x1)
axs[0].set_ylabel(y1)
axs[1].plot(x2d, y2d, linestyle='--', marker='o', color=colors[1]) #, label=y2)
axs[1].set_xlabel(x2)
axs[1].set_ylabel(y2)
for subplot in axs:
subplot.legend(loc='best')
axs[0].axhline(y=0, color='k')
# fill 2nd plot
axs[1].axhline(y=0, color='k')
plt.legend()
if title is not None:
plt.title(title)
plt.tight_layout()
return fig
p = plot_2_subplots(df=df, x1='age', y1='height', y2='hair length', x2=None, title=None)
Explanation: Plotting separate data columns as separate sub-plots:
End of explanation
df.plot
def plot_by_group(df, group, x, y, title=None):
fig, ax = plt.subplots(1, 1, figsize=(3.5, 2.5))
ax.set_xlabel(x)
ax.set_ylabel(y)
# todo: move title up(?)
if title is not None:
ax.set_title(title)
for tup, group_df in df.groupby(group):
# sort on the x attribute
group_df = group_df.sort_values(x)
# todo: print label in legend.
ax.plot(group_df[x], group_df[y], marker='o', label=tup[0])
print(tup)
# todo: put legend outside the figure
plt.legend()
plot_by_group(df=df, group='gender', x='age', y='height', title='this is a title, you bet.')
Explanation: Plot multiple groups (from same data columns) onto the same plot.
End of explanation
def plot_2_subplots_v2(df, x1, x2, y1, y2, title=None):
fig, axs = plt.subplots(2, 1, figsize=(5, 4))
plt_data = {1:(df[x1], df[y1]), 2:(df[x2], df[y2])}
titles = {1:x1, 2:x2}
colors = {1:'#b3cde3', 2:'#decbe4'}
for row, ax in enumerate(axs, start=1):
print(row, ax)
ax.plot(plt_data[row][0], plt_data[row][1], color=colors[row], marker='o', label=row)
ax.set_xlabel('some x')
ax.set_title(titles[row])
plt.tight_layout()
return fig
# kind of a silly example.
p = plot_2_subplots_v2(df=df, x1='age', y1='height', y2='hair length', x2='age', title=None)
Explanation: Jake Vanderplas:
plt.plot can be noticeably more efficient than plt.scatter. The reason is that plt.scatter has the capability to render a different size and/or color for each point, so the renderer must do the extra work of constructing each point individually.
http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.02-Simple-Scatter-Plots.ipynb
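A rough way to check this in practice is to time both calls on the same large point set. The sketch below is an illustrative addition (the point count, variable names, and the use of fig.canvas.draw() to force rendering are my choices, not part of the original notebook), and the exact numbers depend on the backend and hardware:
import time
import numpy as np
import matplotlib.pyplot as plt
pts = np.random.rand(100000, 2)
fig, ax = plt.subplots()
t0 = time.perf_counter()
ax.plot(pts[:, 0], pts[:, 1], 'o')   # one size/color for every marker
fig.canvas.draw()                    # force rendering so the cost is actually measured
t_plot = time.perf_counter() - t0
fig, ax = plt.subplots()
t0 = time.perf_counter()
ax.scatter(pts[:, 0], pts[:, 1])     # per-point size/color capability -> more per-point work
fig.canvas.draw()
t_scatter = time.perf_counter() - t0
print('plot: %.2fs  scatter: %.2fs' % (t_plot, t_scatter))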
End of explanation |
13,401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSAL4243
Step1: Logistic Regression
Step2: <br>
Iris Flower Dataset
Using sepal length and width, predict the type of flower.
K - Nearest Neighbor (kNN) Classifier
Step3: <br>
Logistic Regression
Step4: Regularization Example | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors
df = pd.read_csv('datasets/exam_dataset1.csv', encoding='utf-8')
n_neighbors = 5
X = np.array(df[['exam1','exam2']])
y = np.array(df[['admission']]).ravel()
h = .02 # step size in the mesh
# # Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
print(clf.score(X,y))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("2-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.show()
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan ([email protected])
Lecture 9: Logistic Regression and kNN Examples
Overview
University Admission Dataset
K - Nearest Neighbor (kNN) Classifier
Logistic-Regression
Iris Flower Dataset
K - Nearest Neighbor (kNN) Classifier
Logistic-Regression
Resources
Credits
<br>
<br>
University Admission Dataset
Find whether a student gets admitted into a university based on their scores in two exams taken by the university. You have historical data of previous applicants who were admitted or rejected based on their scores on these two exams.
K - Nearest Neighbor (kNN) Classifier
End of explanation
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import linear_model
df = pd.read_csv('datasets/exam_dataset1.csv', encoding='utf-8')
X = np.array(df[['exam1','exam2']])
y = np.array(df[['admission']]).ravel()
h = .02 # step size in the mesh
logreg = linear_model.LogisticRegression(C=1e5)
# we create an instance of Neighbours Classifier and fit the data.
logreg.fit(X, y)
print(logreg.score(X,y))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4, 3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k', cmap=plt.cm.Paired)
plt.xlabel('Exam 1')
plt.ylabel('Exam 2')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
Explanation: Logistic Regression
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 1
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
print(clf.score(X,y))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.show()
Explanation: <br>
Iris Flower Dataset
Using sepal length and width, predict the type of flower.
K - Nearest Neighbor (kNN) Classifier
End of explanation
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
Y = iris.target
h = .02 # step size in the mesh
logreg = linear_model.LogisticRegression(C=1e5)
# we create an instance of Neighbours Classifier and fit the data.
logreg.fit(X, Y)
print(logreg.score(X,Y))  # use Y, the target array this model was fit on
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4, 3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
logreg.coef_
logreg.intercept_
Explanation: <br>
Logistic Regression
End of explanation
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors
from matplotlib.colors import ListedColormap
from sklearn import linear_model
df_reg = pd.read_csv('datasets/example2.csv', encoding='utf-8')
X = np.array(df_reg[['x']])
y = np.array(df_reg[['y']]).ravel()
# X = np.array(df_reg[['x1','x2']])
# y = np.array(df_reg[['label']]).ravel()
plt.scatter(X,y)
plt.show()
X.shape
df_reg["x_2"] = df_reg["x"]**2
df_reg["x_3"] = df_reg["x"]**3
df_reg["x_4"] = df_reg["x"]**4
X = np.array(df_reg[['x','x_2','x_3','x_4']])
reg = linear_model.Ridge(alpha=100)
# we create an instance of Neighbours Classifier and fit the data.
reg.fit(X, y)
print(reg.score(X,y))
x_line = np.linspace(0,8,100)
x_line = np.array([x_line,x_line**2,x_line**3,x_line**4]).T
y_line = reg.predict(x_line)
reg.intercept_
plt.scatter(X[:,0],y)
plt.plot(x_line[:,0],y_line)
plt.show()
reg.coef_
Explanation: Regularization Example
End of explanation |
13,402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic plotting
Generate coordinate vectors
Step1: Generate coordinate matrices
Step2: Compute function value
Step3: Plot contours
Step4: Plot Decision Boundary
First, generate the data and fit the classifier to the training set
Step5: Next, make a continuous grid of values and evaluate the probability of each (x, y) point in the grid
Step6: Now, plot the probability grid as a contour map and additionally show the test set samples on top of it
Step7: The logistic regression lets you classify new samples based on any threshold you want, so it doesn't inherently have one "decision boundary." But, of course, a common decision rule to use is $p = .5$. We can also just draw that contour level using the above code | Python Code:
# imports assumed by the cells below (numpy and matplotlib are used throughout this notebook)
import numpy as np
import matplotlib.pyplot as plt
nx, ny = 100, 100
x = np.linspace(-5, 5, nx)
y = np.linspace(-5, 5, ny)
Explanation: Basic plotting
Generate coordinate vectors
End of explanation
xx, yy = np.meshgrid(x, y, sparse=True)
Explanation: Generate coordinate matrices
End of explanation
z = np.sin(xx**2 + yy**2) / (xx**2 + yy**2)
Explanation: Compute function value
End of explanation
# filled contour
plt.contourf(x, y, z)
# unfilled contour
plt.contour(x, y, z, levels=[0.5], cmap='Greys', vmin=0, vmax=1.)
Explanation: Plot contours
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
X, y = make_classification(200, 2, 2, 0, weights=[.5, .5], random_state=15)
clf = LogisticRegression().fit(X[:100], y[:100])
Explanation: Plot Decision Boundary
First, generate the data and fit the classifier to the training set
End of explanation
xx, yy = np.mgrid[-5:5:.01, -5:5:.01]
grid = np.c_[xx.ravel(), yy.ravel()]
probs = clf.predict_proba(grid)[:, 1].reshape(xx.shape)
Explanation: Next, make a continuous grid of values and evaluate the probability of each (x, y) point in the grid:
End of explanation
f, ax = plt.subplots(figsize=(8, 6))
contour = ax.contourf(xx, yy, probs, 25, cmap="RdBu",
vmin=0, vmax=1)
ax_c = f.colorbar(contour)
ax_c.set_label("$P(y = 1)$")
ax_c.set_ticks([0, .25, .5, .75, 1])
ax.scatter(X[100:,0], X[100:, 1], c=y[100:], s=50,
cmap="RdBu", vmin=-.2, vmax=1.2,
edgecolor="white", linewidth=1)
ax.set(aspect="equal",
xlim=(-5, 5), ylim=(-5, 5),
xlabel="$X_1$", ylabel="$X_2$")
Explanation: Now, plot the probability grid as a contour map and additionally show the test set samples on top of it:
End of explanation
f, ax = plt.subplots(figsize=(8, 6))
ax.contour(xx, yy, probs, levels=[.5], cmap="Greys", vmin=0, vmax=.6)
ax.scatter(X[100:,0], X[100:, 1], c=y[100:], s=50,
cmap="RdBu", vmin=-.2, vmax=1.2,
edgecolor="white", linewidth=1)
ax.set(aspect="equal",
xlim=(-5, 5), ylim=(-5, 5),
xlabel="$X_1$", ylabel="$X_2$")
Explanation: The logistic regression lets you classify new samples based on any threshold you want, so it doesn't inherently have one "decision boundary." But, of course, a common decision rule to use is $p = .5$. We can also just draw that contour level using the above code:
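For example, the sketch below (an added illustration, not part of the original notebook; the 0.3 cutoff and the variable names are arbitrary choices) applies a non-default threshold to the same fitted clf by thresholding predict_proba directly:
custom_threshold = 0.3
test_probs = clf.predict_proba(X[100:])[:, 1]           # P(y = 1) for the held-out points
custom_preds = (test_probs > custom_threshold).astype(int)
print(custom_preds[:10])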
End of explanation |
13,403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Get the Data
Set index_col=0 to use the first column as the index.
Step2: Standardize the Variables
Because the KNN classifier predicts the class of a given test observation by identifying the observations that are nearest to it, the scale of the variables matters. Any variables that are on a large scale will have a much larger effect on the distance between the observations, and hence on the KNN classifier, than variables that are on a small scale.
Step3: Train Test Split
Step4: Using KNN
Remember that we are trying to come up with a model to predict whether someone will TARGET CLASS or not. We'll start with k=1.
Step5: Predictions and Evaluations
Let's evaluate our KNN model!
Step6: Choosing a K Value
Let's go ahead and use the elbow method to pick a good K Value
Step7: Here we can see that after around K>23 the error rate just tends to hover around 0.06-0.05. Let's retrain the model with that and check the classification report! | Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
K Nearest Neighbors with Python
You've been given a classified data set from a company! They've hidden the feature column names but have given you the data and the target classes.
We'll try to use KNN to create a model that directly predicts a class for a new data point based off of the features.
Let's grab it and use it!
Import Libraries
End of explanation
df = pd.read_csv("Classified Data",index_col=0)
df.head()
Explanation: Get the Data
Set index_col=0 to use the first column as the index.
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df.drop('TARGET CLASS',axis=1))
scaled_features = scaler.transform(df.drop('TARGET CLASS',axis=1))
df_feat = pd.DataFrame(scaled_features,columns=df.columns[:-1])
df_feat.head()
Explanation: Standardize the Variables
Because the KNN classifier predicts the class of a given test observation by identifying the observations that are nearest to it, the scale of the variables matters. Any variables that are on a large scale will have a much larger effect on the distance between the observations, and hence on the KNN classifier, than variables that are on a small scale.
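A tiny numeric illustration of that point (added here for clarity; the numbers are made up): with one feature on a much larger scale, the Euclidean distance is dominated by that feature.
import numpy as np
a = np.array([1000.0, 1.0])    # feature 1 on a large scale, feature 2 on a small scale
b = np.array([1100.0, 9.0])
c = np.array([1000.0, 9.0])
print(np.linalg.norm(a - b))   # ~100.3 -- driven almost entirely by the large-scale feature
print(np.linalg.norm(a - c))   # 8.0   -- the small-scale difference barely moves the distance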
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(scaled_features,df['TARGET CLASS'],
test_size=0.30)
Explanation: Train Test Split
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
Explanation: Using KNN
Remember that we are trying to come up with a model to predict whether someone will TARGET CLASS or not. We'll start with k=1.
End of explanation
from sklearn.metrics import classification_report,confusion_matrix
print(confusion_matrix(y_test,pred))
print(classification_report(y_test,pred))
Explanation: Predictions and Evaluations
Let's evaluate our KNN model!
End of explanation
error_rate = []
# Will take some time
for i in range(1,40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train,y_train)
pred_i = knn.predict(X_test)
error_rate.append(np.mean(pred_i != y_test))
plt.figure(figsize=(10,6))
plt.plot(range(1,40),error_rate,color='blue', linestyle='dashed', marker='o',
markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
Explanation: Choosing a K Value
Let's go ahead and use the elbow method to pick a good K Value:
End of explanation
# FIRST A QUICK COMPARISON TO OUR ORIGINAL K=1
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
print('WITH K=1')
print('\n')
print(confusion_matrix(y_test,pred))
print('\n')
print(classification_report(y_test,pred))
# NOW WITH K=23
knn = KNeighborsClassifier(n_neighbors=23)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
print('WITH K=23')
print('\n')
print(confusion_matrix(y_test,pred))
print('\n')
print(classification_report(y_test,pred))
Explanation: Here we can see that after around K>23 the error rate just tends to hover around 0.06-0.05. Let's retrain the model with that and check the classification report!
End of explanation |
13,404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: Default Systems
Although the default empty Bundle doesn't include a system, there are available
constructors that create default systems. To create a simple binary with component tags
'binary', 'primary', and 'secondary' (as above), you could call default_binary
Step2: or for short
Step3: To build the same binary but as a contact system, you would call
Step4: For more details on dealing with contact binary systems, see the Contact Binary Hierarchy Tutorial and the Contact Binary Example Script.
Adding Components Manually
IMPORTANT
Step5: But there are also shortcut methods for add_star and add_orbit. In these cases you don't need to provide the function, but only the component tag of your star/orbit.
Any of these functions also accept values for any of the qualifiers of the created parameters.
Step6: Here we call the add_component method of the bundle with several arguments
Step7: Defining the Hierarchy
At this point all we've done is add a bunch of Parameters to our Bundle, but
we still need to specify the hierarchical setup of our system.
Here we want to place our two stars (with component tags 'primary' and 'secondary') in our
orbit (with component tag 'binary'). This can be done with several different syntaxes sent to b.set_hierarchy
Step8: or
Step9: If you access the value that this set via get_hierarchy, you'll see that it really just resulted
in a simple string representation
Step10: We could just as easily have used this string to set the hierarchy
Step11: If at any point we want to flip the primary and secondary components or make
this binary a triple, it's seriously as easy as changing this hierarchy, and
everything else will adjust as needed (including cross-ParameterSet constraints
and datasets).
The Hierarchy Parameter
Setting the hierarchy just sets the value of a single parameter (although it may take some time because it also does a lot of paperwork and manages constraints between components in the system). You can access that parameter as usual
Step12: or through any of these shortcuts
Step13: This HierarchyParameter then has several methods unique to itself. You can, for instance, list the component tags of all the stars or orbits in the hierarchy via get_stars or get_orbits, respectively
Step14: Or you can ask for the component tag of the top-level item in the hierarchy via get_top.
Step15: And request the parent, children, child, or sibling of any item in the hierarchy via get_parent_of, get_children_of, or get_sibling_of.
Step16: We can also check whether a given component (by component tag) is the primary or secondary component in its parent orbit via get_primary_or_secondary. Note that here it's just a coincidence (although on purpose) that the component tag is also 'secondary'. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.Bundle()
Explanation: Advanced: Building a System
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
b = phoebe.Bundle.default_binary()
Explanation: Default Systems
Although the default empty Bundle doesn't include a system, there are available
constructors that create default systems. To create a simple binary with component tags
'binary', 'primary', and 'secondary' (as above), you could call default_binary:
End of explanation
b = phoebe.default_binary()
print(b.hierarchy)
Explanation: or for short:
End of explanation
b = phoebe.default_binary(contact_binary=True)
print(b.hierarchy)
Explanation: To build the same binary but as a contact system, you would call:
End of explanation
b = phoebe.Bundle()
b.add_component(phoebe.component.star, component='primary')
b.add_component('star', component='secondary')
Explanation: For more details on dealing with contact binary systems, see the Contact Binary Hierarchy Tutorial and the Contact Binary Example Script.
Adding Components Manually
IMPORTANT: in the vast majority of cases, starting with one of the default systems is sufficient. Below we will discuss the alternative method of building a system from scratch.
By default, an empty Bundle does not contain any information about our system.
So, let's first start by adding a few stars. Here we'll call the generic add_component method. This method works for any type of component in the system - stars, orbits, planets, disks, rings, spots, etc. The first argument needs to be a callable or the name of a callable in phoebe.parameters.component which include the following options:
orbit
star
envelope
add_component also takes a keyword argument for the 'component' tag. Here we'll give them component tags 'primary' and 'secondary' - but note that these are merely convenience labels and do not hold any special roles. Some tags, however, are forbidden if they clash with other tags or reserved values - so if you get an error stating the component tag is forbidden, try using a different string.
End of explanation
b.add_star('extrastarforfun', teff=6000)
Explanation: But there are also shortcut methods for add_star and add_orbit. In these cases you don't need to provide the function, but only the component tag of your star/orbit.
Any of these functions also accept values for any of the qualifiers of the created parameters.
End of explanation
b.add_orbit('binary')
Explanation: Here we call the add_component method of the bundle with several arguments:
a function (or the name of a function) in phoebe.parameters.component. This
function tells the bundle what parameters need to be added.
component: the tag that we want to give this component for future reference.
any additional keyword arguments: you can also provide initial values for Parameters
that you know will be created. In the last example you can see that the
effective temperature will already be set to 6000 (in default units which is K).
and then we'll do the same to add an orbit:
End of explanation
b.set_hierarchy(phoebe.hierarchy.binaryorbit, b['binary'], b['primary'], b['secondary'])
Explanation: Defining the Hierarchy
At this point all we've done is add a bunch of Parameters to our Bundle, but
we still need to specify the hierarchical setup of our system.
Here we want to place our two stars (with component tags 'primary' and 'secondary') in our
orbit (with component tag 'binary'). This can be done with several different syntaxes sent to b.set_hierarchy:
End of explanation
b.set_hierarchy(phoebe.hierarchy.binaryorbit(b['binary'], b['primary'], b['secondary']))
Explanation: or
End of explanation
b.get_hierarchy()
Explanation: If you access the value that this set via get_hierarchy, you'll see that it really just resulted
in a simple string representation:
End of explanation
b.set_hierarchy('orbit:binary(star:primary, star:secondary)')
Explanation: We could just as easily have used this string to set the hierarchy:
End of explanation
b['hierarchy@system']
Explanation: If at any point we want to flip the primary and secondary components or make
this binary a triple, it's seriously as easy as changing this hierarchy, and
everything else will adjust as needed (including cross-ParameterSet constraints
and datasets).
The Hierarchy Parameter
Setting the hierarchy just sets the value of a single parameter (although it may take some time because it also does a lot of paperwork and manages constraints between components in the system). You can access that parameter as usual:
End of explanation
b.get_hierarchy()
b.hierarchy
Explanation: or through any of these shortcuts:
End of explanation
print(b.hierarchy.get_stars())
print(b.hierarchy.get_orbits())
Explanation: This HierarchyParameter then has several methods unique to itself. You can, for instance, list the component tags of all the stars or orbits in the hierarchy via get_stars or get_orbits, respectively:
End of explanation
print(b.hierarchy.get_top())
Explanation: Or you can ask for the component tag of the top-level item in the hierarchy via get_top.
End of explanation
print(b.hierarchy.get_parent_of('primary'))
print(b.hierarchy.get_children_of('binary'))
print(b.hierarchy.get_child_of('binary', 0)) # here 0 means primary component, 1 means secondary
print(b.hierarchy.get_sibling_of('primary'))
Explanation: And request the parent, children, child, or sibling of any item in the hierarchy via get_parent_of, get_children_of, or get_sibling_of.
End of explanation
print(b.hierarchy.get_primary_or_secondary('secondary'))
Explanation: We can also check whether a given component (by component tag) is the primary or secondary component in its parent orbit via get_primary_or_secondary. Note that here it's just a coincidence (although on purpose) that the component tag is also 'secondary'.
End of explanation |
13,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
Student
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Training observations
Better results could be achieved if at least two different keep_prob parameters are used for training. e.g., a higher accuracy was obtained when dropout was removed in the encoding_layer function (it was tested either on its RNN layers or at the output). That's why a high keep_prob=0.85 was selected.
A slightly better accuracy can also be achieved by increasing RNN_size but at the expense of a much higher computational cost.
No advantage was found when using cloud computing with a high-end Nvidia Tesla K80 GPU in this project. Same wall times were achieved with a GeForce GTX 960 GPU.
Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
Student: Angel Martinez-Tenor <br>
Deep Learning Nanodegree Foundation - Udacity <br>
April 17, 2017 <br>
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text, target_id_text = [], []
source_sentences, target_sentences = source_text.split("\n"), target_text.split("\n")
assert len(source_sentences) == len(target_sentences), "Different number of source and target sentences"
for source, target in zip(source_sentences, target_sentences):
source_id_text.append([source_vocab_to_int[w] for w in source.split()])
target_with_EOS = target + ' <EOS>'
target_id_text.append([target_vocab_to_int[w] for w in target_with_EOS.split()])
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input_text = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
learning_rate = tf.placeholder(tf.float32)
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return input_text, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
# From Sequence to Sequence lesson:
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
processed_target = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return processed_target
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
# Notes:
# 1 - Dropout location not specified: applied to each RNN layer here
# 2 - No RNN initial state is given here
rnn = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
_, encoder_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return encoder_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
# Dropout applied to decoder RNN
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_decoder_fn,
dec_embed_input, sequence_length,
scope=decoding_scope)
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
# Reviewer hint: There is no need to apply dropout here since it doesn't matter for inference
# dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings,
start_of_sequence_id, end_of_sequence_id,
maximum_length - 1, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn,
scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
# Dropout applied to decoding_layer_train function
train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length,
decoding_scope, output_fn, keep_prob)
decoding_scope.reuse_variables()
inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], sequence_length, vocab_size,
decoding_scope, output_fn, keep_prob)
return train_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Apply embedding to the input
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# Encode the input - Dropout applied here
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# Process target data
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Apply embedding to the target data
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) # tf variable needed
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Decode the encoded input - Dropout also applied here
return decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length,
rnn_size, num_layers, target_vocab_to_int, keep_prob)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Both Accuracy and Wall Time were measured to tune the hyperparameters
# Number of Epochs
epochs = 2 # >90% accuracy can be achieved in 1 epoch easily by removing dropout to some RNN layers
# Batch Size
batch_size = 128 # tested from 64 to 1024
# RNN Size
rnn_size = 256 # tested from 128 to 1024
# Number of Layers
num_layers = 1 # tested from 1 to 3
# Embedding Size
encoding_embedding_size = 1024 # tested from 12 to 4096
decoding_embedding_size = 1024 # tested from 12 to 4096
# Learning Rate
learning_rate = 0.001 # tested from 0.01 to 0.0001
# Dropout Keep Probability
keep_probability = 0.85 # tested from 0.5 to 0.9
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
%%time
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
if batch_i % 200 == 0:
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Training observations
Better results could be achieved if at least two different keep_prob parameters are used for training. e.g., a higher accuracy was obtained when dropout was removed in the encoding_layer function (it was tested either on its RNN layers or at the output). That's why a high keep_prob=0.85 was selected.
A slightly better accuracy can also be achieved by increasing RNN_size but at the expense of a much higher computational cost.
No advantage was found when using cloud computing with a high-end Nvidia Tesla K80 GPU in this project. Same wall times were achieved with a GeForce GTX 960 GPU.
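A minimal sketch of the first observation above (an addition, not what was actually trained here; the placeholder names enc_keep_prob and dec_keep_prob are my own) would be to give the encoder and decoder separate dropout controls:
import tensorflow as tf
enc_keep_prob = tf.placeholder(tf.float32, name='enc_keep_prob')   # e.g. fed 1.0 to disable encoder dropout
dec_keep_prob = tf.placeholder(tf.float32, name='dec_keep_prob')   # e.g. fed 0.85 for the decoder
# encoding_layer(...) would then be passed enc_keep_prob, while decoding_layer_train(...) uses dec_keep_prob.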
Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
# reviewer improvement:
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
13,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WNixalo
2018/2/11 17
Step1: 1.
Consider the polynomial $p(x) = (x-2)^9 = x^9 - 18x^8 + 144x^7 - 672x^6 + 2016x^5 - 4032x^4 + 5376x^3 - 4608x^2 + 2304x - 512$
a. Plot $p(x)$ for $x=1.920,\,1.921,\,1.922,\ldots,2.080$ evaluating $p$ via its coefficients $1,\,-18,\,144,\ldots$
b. Plot the same plot again, now evaluating $p$ via the expression $(x-2)^9$.
c. Explain the difference.
(The numpy method linspace will be useful for this)
Step2: WNx
Step3: WNx
Step4: a, b
Step5: WNx
Step6: Main memory reference takes 100x longer than an L1 cache lookup.
Disk seek takes 30,000x longer than a main memory reference.
L1 cache
Step7: WNx | Python Code:
%matplotlib inline
import numpy as np
import torch as pt
import matplotlib.pyplot as plt
plt.style.use('seaborn')
Explanation: WNixalo
2018/2/11 17:51
Homework No.1
End of explanation
def p(x, mode=0):
if mode == 0:
return x**9 - 18*x**8 + 144*x**7 - 672*x**6 + 2016*x**5 - 4032*x**4 + 5376*x**3 - 4608*x**2 + 2304*x - 512
else:
return (x-2)**9
Explanation: 1.
Consider the polynomial $p(x) = (x-2)^9 = x^9 - 18x^8 + 144x^7 - 672x^6 + 2016x^5 - 4032x^4 + 5376x^3 - 4608x^2 + 2304x - 512$
a. Plot $p(x)$ for $x=1.920,\,1.921,\,1.922,\ldots,2.080$ evaluating $p$ via its coefficients $1,\,-18,\,144,\ldots$
b. Plot the same plot again, now evaluating $p$ via the expression $(x-2)^9$.
c. Explain the difference.
(The numpy method linspace will be useful for this)
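An aside added for clarity (not part of the original problem statement): "evaluating p via its coefficients" is usually read as summing the expanded terms, which numpy can do directly with np.polyval on the coefficient list (highest degree first); a minimal sketch, with the factored form as the comparison case:
import numpy as np
coeffs = [1, -18, 144, -672, 2016, -4032, 5376, -4608, 2304, -512]
xs = np.linspace(1.92, 2.08, 161)
p_from_coeffs = np.polyval(coeffs, xs)   # expanded-coefficient evaluation (numerically noisier near x = 2)
p_factored = (xs - 2)**9                 # factored evaluation (smooth)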
End of explanation
# Signature: np.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None)
np.linspace(1.92, 2.08, num=161)
# np.arange(1.92, 2.08, 0.001)
start = 1.92
stop = 2.08
num = int((stop-start)/0.001 + 1) # =161
x = np.linspace(start, stop, num)
def p_cœf(x):
return x - 18*x + 144*x - 672*x + 2016*x - 4032*x + 5376*x - 4608*x + 2304*x - 512
def p_cœf_alt(x):
return p(x,0)
def p_ex9(x):
return p(x,1)
Explanation: WNx: wait, what does it mean to evaluate a function by its coefficients? How is that different than just evaluating it?
--> does she mean to ignore the exponents? Because that would make b. make more sense.. I .. think.*
End of explanation
vec_pcf = np.vectorize(p_cœf)
vec_pcf_alt = np.vectorize(p_cœf_alt)
vec_px9 = np.vectorize(p_ex9)
y_cf = vec_pcf(x)
y_cf_alt = vec_pcf_alt(x)
y_x9 = vec_px9(x)
y = p(x)
Explanation: WNx: huh.. this is a thing.
Init signature: np.vectorize(pyfunc, otypes=None, doc=None, excluded=None, cache=False, signature=None)
End of explanation
fig = plt.figure(1, figsize=(12,12))
ax = fig.add_subplot(3,3,1)
ax.set_title('Coefficients')
ax.plot(y_cf)
ax = fig.add_subplot(3,3,2)
ax.set_title('$(x - 2)^9$')
ax.plot(y_x9)
ax = fig.add_subplot(3,3,3)
ax.set_title('$p(x)$')
ax.plot(y)
ax = fig.add_subplot(3,3,4)
ax.set_title('Coefficients (Alternate)')
ax.plot(y_cf_alt)
ax = fig.add_subplot(3,3,5)
ax.set_title('All')
# ax.plot(y_cf)
ax.plot(y_x9)
ax.plot(y_cf_alt)
ax.plot(y);
Explanation: a, b:
End of explanation
3e-3/1e-7
Explanation: WNx: I think my original interpretation of what "evaluate p by its coefficients" meant was wrong, so I'm leaving it out of the final "All" plot, it just drowns everything else out.
c:
WNx: $p(x) = (x-2)^9$ is the 'general' version of the Coefficient interpretation of $p$. It captures the overall trend of $p$ without all the detail. Kind of an average -- gives you the overall picture of what's going on. For instance you'd compress signal $p$ to its $(x-2)^9$ form, instead of saving its full coeff form.
2.
2. How many different double-precision numbers are there? Express your answer using powers of 2
WNx: $2^{64} - (2^{53} - 2^0)$ for IEEE 754 64-bit Double. See: Quora Link
3.
3. Using the updated Numbers Every Programmer Should Know, how much longer does a main memory reference take than an L1 cache look-up? How much longer does a disk seek take than a main memory reference?
End of explanation
### some tests:
# Q is I
Q = np.eye(3)
A = np.random.randint(-10,10,(3,3))
A
Q@[email protected]
# random orthogonal matrix Q
# ref: https://stackoverflow.com/a/38426572
from scipy.stats import ortho_group
Q = ortho_group.rvs(dim=3)
Explanation: Main memory reference takes 100x longer than an L1 cache lookup.
Disk seek takes 30,000x longer than a main memory reference.
L1 cache: 1e-9s.
MMRef: 1e-7s.
DS: 3e-3s
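A quick check of the ratios behind those numbers:
print(1e-7 / 1e-9)   # main memory reference vs. L1 cache lookup -> 100.0
print(3e-3 / 1e-7)   # disk seek vs. main memory reference -> 30000.0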
4.
4. From the Halide Video, what are 4 ways to traverse a 2d array?
WNx:
Scanline Order: Sequentially in Y, within that: Sequentially in X. (row-maj walk)
(or): Transpose X&Y and do a Column-Major Traversal. (walk down cols first)
Serial Y, Vectorize X by n: walk down x in increments (vectors)
Parallel Y, Vectorize X by n: distribute scanlines into parallel threads
Split X & Y by tiles (Tile-Traversal). Split X by n, Y by n. Serial Y_outer, Serial X_outer, Serial Y_inner, Serial X_inner
See: Halide Video section
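For illustration only (not from the lecture), two of these orders written as plain Python loops over a toy array:
import numpy as np
img = np.arange(12).reshape(3, 4)   # toy 2D array
T = 2                               # tile size
total = 0
# scanline order: serial in y, then serial in x
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        total += img[y, x]
# tile traversal: split x and y into T-sized tiles, then walk inside each tile
for y0 in range(0, img.shape[0], T):
    for x0 in range(0, img.shape[1], T):
        for y in range(y0, min(y0 + T, img.shape[0])):
            for x in range(x0, min(x0 + T, img.shape[1])):
                total += img[y, x]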
5.
5. Using the animations --- (source), explain what the benefits and pitfalls of each approach. Green squares indicate that a value is being read; red indicates a value is being written. Your answers should be longer in length (give more detail) than just two words.
WNx:
1) Parallelizable across scanlines. Entire input computed before output computation. \ Poor Locality.
Loading is slow and limited by system memory bandwidth. By the time the blurred in y stage goes to read some intermediate data, it's probably been evicted from cache.
2) Parallelizable across scanlines. Locality. \ Redundant Computation.
Each point in blurred in x is recomputed 3 times.
3) Locality & No redundant computation. \ Serial Dependence --> Poor Parallelism.
Introduction of a serial dependence in the scanlines of the output. Relying on having to compute scanline N-1 before computing scanline N means the output scanlines can no longer be computed independently in parallel.
6.
6. Prove that if $A = Q B Q^T$ for some orthogonal matrix $Q$, the $A$ and $B$ have the same singular values.
Orthogonal Matrix: $Q^TQ = QQ^T = I \iff Q^T = Q^{-1}$
So.. if you put matrix $B$ in between $Q$ and $Q^T$, what you're doing is performing a transformation on $B$ and then performing the inverse of that transformation on it. ie: Returning $B$ to what it was originally. $\Rightarrow$ if $B$ is ultimately unchanged and $A=QBQ^T$ then $A=B$ (or at least same sing.vals?) This -- it seems to me -- is an inherent property of the orthogonal matrix $Q$.
edit: ahhh, Singular Values are not just the values of a matrix. Like Eigen Values, they tell something special about it. Mathematics StackEx link
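One compact way to make the claim precise, using only $Q^TQ=\mathbb{1}$: from $A=QBQ^T$ we get $A^TA = QB^TQ^TQBQ^T = Q\,(B^TB)\,Q^T$, so $A^TA$ and $B^TB$ are similar and share eigenvalues; since the singular values of $A$ (resp. $B$) are the square roots of the eigenvalues of $A^TA$ (resp. $B^TB$), $A$ and $B$ have the same singular values.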
End of explanation
# setting A & B
B = np.random.randint(-100,100,(3,3))
A = Q@[email protected]
Ua, sa, Va = np.linalg.svd(A, full_matrices=False)
Ub, sb, Vb = np.linalg.svd(B, full_matrices=False)
# sa & sb are the singular values of A and B
np.isclose(sa, sb)
sa, sb
Explanation: WNx: gonna have to do SVD to find the singular values of $A$. Then make a matrix $B$ st. $A=QBQ^T$. Then check that A.σ == B.σ. C.Mellon U. page on SVD
From the Lesson 2 notebook, I think I'll start with $B$ and compute $A$ acc. to the eqn, then check σ's of both.
Aha. So σ is s is S. The diagonal matrix of singular values. Everyone's using different names for the same thing. bastards.
End of explanation |
13,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook is the computational appendix of arXiv
Step1: Checking critical visibility
The question whether a qubit or qutrit POVM is simulable by projective measurements boils down to an SDP feasibility problem. Few SDP solvers can handle feasibility problems, so from a computational point of view, it is always easier to phrase the question as an SDP optimization, which would return the critical visibility, below which the amount of depolarizing noise would allow simulability. Recall Eq. (8) from the paper that defines the noisy POVM that we obtain subjecting a POVM $\mathbf{M}$ to a depolarizing channel $\Phi_t$
Step2: Next we check with an SDP whether it is simulable by projective measurements. A four outcome qubit POVM $\mathbf{M} \in\mathcal{P}(2,4)$ is simulable if and only if
$M_{1}=N_{12}^{+}+N_{13}^{+}+N_{14}^{+},$
$M_{2}=N_{12}^{-}+N_{23}^{+}+N_{24}^{+},$
$M_{3}=N_{13}^{-}+N_{23}^{-}+N_{34}^{+},$
$M_{4}=N_{14}^{-}+N_{24}^{-}+N_{34}^{-},$
where Hermitian operators $N_{ij}^{\pm}$ satisfy $N_{ij}^{\pm}\geq0$ and $N_{ij}^{+}+N_{ij}^{-}=p_{ij}\mathbb{1}$, where $i<j$ , $i,j=1,2,3,4$ and $p_{ij}\geq0$ as well as $\sum_{i<j}p_{ij}=1$, that is, the $p_{ij}$ values form a probability vector. This forms an SDP feasibility problem, which we can rephrase as an optimization problem by adding depolarizing noise to the left-hand side of the above equations and maximizing the visibility $t$
Step3: Qutrit example
We take the qutrit POVM from Section 4.1 in arXiv
Step4: The SDP to be solved is more intricate than the qubit case. Using the notations of Lemma 14, let $\mathbf{M}\in\mathcal{P}(d,n)$ be an $n$-outcome measurement on a $d$ dimensional Hilbert space. Let $m\leq n$. The critical visibility $t_{m}(\mathbf{M})$ can be computed via the following SDP programme
$\max_{t\in[0,1]} t$
Such that
$t\mathbf{M}+(1-t)\left(\mathrm{tr}(M_1)\frac{\mathbb{1}}{d},\ldots,\mathrm{tr}(M_n)\frac{\mathbb{1}}{d}\right)=\sum_{X\in[n]_2} \mathbf{N}_X + \sum_{Y\in[n]_3} \mathbf{N}_Y$,
where the optimization runs over Hermitian operator tuples $\left\{\mathbf{N}_X\right\}_{X\in[n]_2}$, $\left\{\mathbf{N}_Y\right\}_{Y\in[n]_3}$ and coefficients $\left\{p_X\right\}_{X\in[n]_2}$, $\left\{p_Y\right\}_{Y\in[n]_3}$ satisfying
$[\mathbf{N}_X]_i \geq 0\,,\ [\mathbf{N}_Y]_i \geq 0\,,\quad i=1,\ldots,n$,
$[\mathbf{N}_X]_i = 0$ for $i\notin X$, $[\mathbf{N}_Y]_i = 0$ for $i\notin Y$,
$\mathrm{tr}\left([\mathbf{N}_Y]_i\right) = p_Y$ for $i\in Y$,
$\sum_{i=1}^n [\mathbf{N}_X]_i = p_X \mathbb{1}\,,\quad \sum_{i=1}^n[\mathbf{N}_Y]_i = p_Y \mathbb{1}$,
$p_X \geq 0\,,\ p_Y\geq 0\,,\quad \sum_{X\in[n]_2} p_X+\sum_{Y\in[n]_3} p_Y=1$.
Solving this SDP for the qutrit POVM above, we see that the visibility is far from one
Step5: We can also look only for the visibility needed to decompose the POVM into general 3-outcome POVMs.
Step6: Next we look at a projective-simulable POVM
Step7: The result is very close to one
Step8: External polytopes approximating $\mathcal{P}(2,4)$
Here we repeat some of the theory we presented in Appendix D. We would like to find an external polytope that tightly approximates $\mathcal{P}(2,4)$ and then check how much we have to "shrink" it until it fits inside the set of simulable POVMs. Since the operators ${\mathbb{1}, \sigma_x, \sigma_y, \sigma_z}$ form is a basis for the real space of Hermitian matrices $\mathrm{Herm}(\mathbb{C}^2)$, we can write any matrix in this set as $M = \alpha \mathbb{1} + x \sigma_x + y \sigma_y + z \sigma_z$ for some $\alpha, x, y, z$ real numbers. We relax the positivity condition of the measurement effects by requiring $\mathrm{tr}(M|\psi_j\rangle\langle\psi_j|)\geq0$ for some collection of pure states ${|\psi_j\rangle\langle\psi_j|}_{j=1}^N$, which in turn can always be expressed as $|\psi_j\rangle\langle\psi_j|=(1/2)(\mathbb{1}-\vec{v}_j \cdot \vec{\sigma})$ for a vector $\vec{v}_j$ from a unit sphere in $\mathbb{R}^3$. Thus, with the measurement effects also expressed in the same basis, we can write the relaxed positivity conditions as
$(x,y,z)\cdot v_j \leq \alpha,\ i=1,\ldots,N$,
where "$\cdot$" denotes the standard inner product in $\mathbb{R}^3$. We describe the approximating polytope as
$\begin{eqnarray}
&\alpha_i \geq 0, \ i=1,...,4, \sum_i{\alpha_i} = 1\
&\sum_i{x_i} = \sum_i{y_i} = \sum_i{z_i} = 0.
\end{eqnarray}$
This yields sixteen real parameters, which would be expensive to treat computationally. We can, however, exploit certain properties that reduce the number of parameters. First of all, since the effects add up to the identity, we can drop the last four parameters. Then, due to invariance properties and the characteristics of extremal POVMs, one parameter is sufficient for the first effect, and three are enough for the second. In total, we are left with eight parameters.
We would like to define inequalities defining the facets of the polytope, run a vertex enumeration algorithm, and refine the polytope further. From a computational perspective, a critical point is enumerating the vertices given the inequalities. For this, two major implementations are available that use fundamentally different algorithms
Step9: Next, we would like to constrain the third effect $M_3$, and we start this approximation by defining a polytope approximating the unit sphere in $\mathbb{R}^3$. Similiarly to the previous case, the approximation is defined by the points of tangency to the sphere, provided by the stereographic projection of a set of rational points contained in $[-1, 1]\times[-1, 1]$.
Step10: The following constraints ensure that the third operator $M_3$ of the measurement is a quasi-effect, with the approximation given by $v$
Step11: The next set of constraints ensures the same condition for the last effect, which we express by $\mathbb{1} - M_1 - M_2 - M_3$
Step12: We need that $\alpha_0, \alpha_1, \alpha_2 \geq 0$
Step13: We also require that $\alpha_0+\alpha_1+\alpha_2 \leq 1$, which corresponds to expressing the previous constraint for the last effect
Step14: Finally, we need that $\alpha_0 \geq \alpha_1 \geq \alpha_2 \geq 1-\alpha_0-\alpha_1-\alpha_2$, a condition that we can impose without lost of generality due to relabeling. Once we have the last constraints, we stack the vectors in a single array.
Step15: We enumerate the vertices, which is a time-consuming operation
Step16: As the last step, we iterate the SDP optimization described in Section "Checking criticial visibility" over all vertices to get the best shrinking factor. This takes several hours to complete. Parallel computations do not work from a notebook, but they do when the script is executed in a console. For this reason, here we disable parallel computations.
Step17: External polytopes approximating $\mathcal{P}_{\mathrm{cov}}(3,9)$
Following the same reasoning applied to approximate $\mathcal{P}(2,4)$ we now approximate the set of qutrit covariant POVMs $\mathcal{P}{\mathrm{cov}}(3,9)$ regarding the discrete Heinsenberg group. This task is relatively simple, since we need only to consider a quasi-positive seed $M$ and derive the effects from it by conjugating by the unitaries $D{ij}$, which rotate the space by symmetric directions.
Step18: We use the the approximation to the set of positive semi-definite operators given by $tr(M\Psi_i)\geq0$, for some finite set of rank-one projectors ${\Psi_i}$. We generate random projectors and rotate them by the unitaries $D_{ij}$ above in order to obtain a more regular distribution in the space of projectors.
Step19: We now translate each of the trace constraints above by writing each $M = (m_{ab})$ as a 8-dimensional real vector $(m_{00}, re(m_{01}), im(m_{01}),..., re(m_{12}), im(m_{12}))$, where $m_{22} = 1/3 -m_{00} -m_{11}$ since its trace is fixed.
Step20: We then construct the polytope generated by these inequalities.
Step21: To have an idea on how good the approximation is, we can translate each vertice obtained into a quasi-POVM and check how negative its eigenvalues are.
Step22: We then optimise over the extremal points of the polytope.
Step23: Decomposition of qutrit three-outcome, trace-one POVMS into projective measurements
We implemented the constructive strategy to find a projective decomposition for trace-one qutrit measurements $\mathbf{M}\in\mathcal{P}_1(3,3)$ we described in Appendix D.
First we define a function to generate a random trace-1 POVM. This is the only step that requires an additional dependency compared to the ones we loaded in the beginning of the notebook. The dependency is QuTiP.
Step24: Then we decompose a random POVM following the cascade of "rank reductions" described in Appendix D, and check the ranks
Step25: As a sanity check, we look at the ranks of the effects of the individual projective measurements. We must point out that the numerical calculations occasionally fail, and we set the tolerance in rank calculations high.
Step26: We show that the projective measurements indeed return the POVM | Python Code:
from __future__ import print_function, division
from fractions import Fraction
import numpy as np
import numpy.linalg
import random
import time
from povm_tools import basis, check_ranks, complex_cross_product, dag, decomposePovmToProjective, \
enumerate_vertices, find_best_shrinking_factor, get_random_qutrit, \
get_visibility, Pauli, truncatedicosahedron
Explanation: Introduction
This notebook is the computational appendix of arXiv:1609.06139. We demonstrate how to use some convenience functions to decide whether a qubit or qutrit POVM is simulable and how much noise is needed to make it simulable. We also give the details how to reproduce the numerical results shown in the paper. Furthermore, we show how to decompose a simulable POVM into a convex combination of projective measurements.
To improve readability of this notebook, we placed the supporting functions to a separate file; please download this in the same folder as the notebook if you would like to evaluate it. The following dependencies must also be available: the Python Interface for Conic Optimization Software Picos and its dependency cvxopt, at least one SDP solver (SDPA as an executable in the path or Mosek with its Python interface installed; cvxopt as a solver is not recommended), and a vertex enumerator (cdd with its Python interface or lrs/plrs as an executable in the path).
First, we import everything we will need:
End of explanation
def dp(v):
result = np.eye(2, dtype=np.complex128)
for i in range(3):
result += v[i]*Pauli[i]
return result
b = [np.array([ 1, 1, 1])/np.sqrt(3),
np.array([-1, -1, 1])/np.sqrt(3),
np.array([-1, 1, -1])/np.sqrt(3),
np.array([ 1, -1, -1])/np.sqrt(3)]
M = [dp(bj)/4 for bj in b]
Explanation: Checking critical visibility
The question whether a qubit or qutrit POVM is simulable by projective measurements boils down to an SDP feasibility problem. Few SDP solvers can handle feasibility problems, so from a computational point of view, it is always easier to phrase the question as an SDP optimization, which would return the critical visibility, below which the amount of depolarizing noise would allow simulability. Recall Eq. (8) from the paper that defines the noisy POVM that we obtain subjecting a POVM $\mathbf{M}$ to a depolarizing channel $\Phi_t$:
$\left[\Phi_t\left(\mathbf{M}\right)\right]_i := t M_i + (1-t)\frac{\mathrm{tr}(M_i)}{d} \mathbb{1}$.
If this visibility $t\in[0,1]$ is one, the POVM $\mathbf{M}$ is simulable.
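As a small illustration, Eq. (8) in code (this helper is not part of povm_tools; it is only a direct transcription of the formula):
def depolarize(M, t, d=2):
    # apply the depolarizing channel of visibility t to a list of POVM effects
    return [t*Mi + (1 - t)*np.trace(Mi)*np.eye(d)/d for Mi in M]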
Qubit example
As an example, we study the tetrahedron measurement (see Appendix B in arXiv:quant-ph/0702021):
End of explanation
get_visibility(M)
Explanation: Next we check with an SDP whether it is simulable by projective measurements. A four outcome qubit POVM $\mathbf{M} \in\mathcal{P}(2,4)$ is simulable if and only if
$M_{1}=N_{12}^{+}+N_{13}^{+}+N_{14}^{+},$
$M_{2}=N_{12}^{-}+N_{23}^{+}+N_{24}^{+},$
$M_{3}=N_{13}^{-}+N_{23}^{-}+N_{34}^{+},$
$M_{4}=N_{14}^{-}+N_{24}^{-}+N_{34}^{-},$
where Hermitian operators $N_{ij}^{\pm}$ satisfy $N_{ij}^{\pm}\geq0$ and $N_{ij}^{+}+N_{ij}^{-}=p_{ij}\mathbb{1}$, where $i<j$ , $i,j=1,2,3,4$ and $p_{ij}\geq0$ as well as $\sum_{i<j}p_{ij}=1$, that is, the $p_{ij}$ values form a probability vector. This forms an SDP feasibility problem, which we can rephrase as an optimization problem by adding depolarizing noise to the left-hand side of the above equations and maximizing the visibility $t$:
$\max_{t\in[0,1]} t$
such that
$t\,M_{1}+(1-t)\,\mathrm{tr}(M_{1})\frac{\mathbb{1}}{2}=N_{12}^{+}+N_{13}^{+}+N_{14}^{+},$
$t\,M_{2}+(1-t)\,\mathrm{tr}(M_{2})\frac{\mathbb{1}}{2}=N_{12}^{-}+N_{23}^{+}+N_{24}^{+},$
$t\,M_{3}+(1-t)\,\mathrm{tr}(M_{3})\frac{\mathbb{1}}{2}=N_{13}^{-}+N_{23}^{-}+N_{34}^{+},$
$t\,M_{4}+(1-t)\,\mathrm{tr}(M_{4})\frac{\mathbb{1}}{2}=N_{14}^{-}+N_{24}^{-}+N_{34}^{-}$.
If the critical visibility is one, we have a simulable measurement. We solve this SDP with the function get_visibility for the tetrahedron measurement; the result below indicates that it is not simulable:
End of explanation
psi0 = np.array([[1/np.sqrt(2)], [1/np.sqrt(2)], [0]])
omega = np.exp(2*np.pi*1j/3)
D = [[omega**(j*k/2) * sum(np.power(omega, j*m) * np.kron(basis((k+m) % 3), basis(m).T)
for m in range(3)) for k in range(1, 4)] for j in range(1, 4)]
psi = [[D[j][k].dot(psi0) for k in range(3)] for j in range(3)]
M = [np.kron(psi[k][j], psi[k][j].conj().T)/3 for k in range(3) for j in range(3)]
Explanation: Qutrit example
We take the qutrit POVM from Section 4.1 in arXiv:quant-ph/0310013:
End of explanation
get_visibility(M, solver=None, proj=True)
Explanation: The SDP to be solved is more intricate than the qubit case. Using the notations of Lemma 14, let $\mathbf{M}\in\mathcal{P}(d,n)$ be an $n$-outcome measurement on a $d$ dimensional Hilbert space. Let $m\leq n$. The critical visibility $t_{m}(\mathbf{M})$ can be computed via the following SDP programme
$\max_{t\in[0,1]} t$
Such that
$t\mathbf{M}+(1-t)\left(\mathrm{tr}(M_1)\frac{\mathbb{1}}{d},\ldots,\mathrm{tr}(M_n)\frac{\mathbb{1}}{d}\right)=\sum_{X\in[n]_2} \mathbf{N}_X + \sum_{Y\in[n]_3} \mathbf{N}_Y$,
where the optimization runs over Hermitian operator tuples $\left\{\mathbf{N}_X\right\}_{X\in[n]_2}$, $\left\{\mathbf{N}_Y\right\}_{Y\in[n]_3}$ and coefficients $\left\{p_X\right\}_{X\in[n]_2}$, $\left\{p_Y\right\}_{Y\in[n]_3}$ satisfying
$[\mathbf{N}_X]_i \geq 0\,,\ [\mathbf{N}_Y]_i \geq 0\,,\quad i=1,\ldots,n$,
$[\mathbf{N}_X]_i = 0$ for $i\notin X$, $[\mathbf{N}_Y]_i = 0$ for $i\notin Y$,
$\mathrm{tr}\left([\mathbf{N}_Y]_i\right) = p_Y$ for $i\in Y$,
$\sum_{i=1}^n [\mathbf{N}_X]_i = p_X \mathbb{1}\,,\quad \sum_{i=1}^n[\mathbf{N}_Y]_i = p_Y \mathbb{1}$,
$p_X \geq 0\,,\ p_Y\geq 0\,,\quad \sum_{X\in[n]_2} p_X+\sum_{Y\in[n]_3} p_Y=1$.
Solving this SDP for the qutrit POVM above, we see that the visibility is far from one:
End of explanation
get_visibility(M, solver=None, proj=False)
Explanation: We can also look only for the visibility needed to decompose the POVM into general 3-outcome POVMs.
End of explanation
psi = [get_random_qutrit()]
psi.append(complex_cross_product(psi[0], np.array([[0], [0], [1]])))
psi.append(complex_cross_product(psi[0], psi[1]))
phi = [get_random_qutrit()]
phi.append(complex_cross_product(phi[0], np.array([[0], [0], [1]])))
phi.append(complex_cross_product(phi[0], phi[1]))
M = [0.5*np.kron(psi[0], psi[0].conj().T),
0.5*np.kron(psi[1], psi[1].conj().T),
0.5*np.kron(psi[2], psi[2].conj().T) + 0.5*np.kron(phi[0], phi[0].conj().T),
0.5*np.kron(phi[1], phi[1].conj().T),
0.5*np.kron(phi[2], phi[2].conj().T),
np.zeros((3, 3), dtype=np.float64),
np.zeros((3, 3), dtype=np.float64),
np.zeros((3, 3), dtype=np.float64),
np.zeros((3, 3), dtype=np.float64)]
Explanation: Next we look at a projective-simulable POVM:
End of explanation
get_visibility(M)
Explanation: The result is very close to one:
End of explanation
n = 25
# crit is an approximation of 1/(1+sqrt2), the point whose 2D
# stereographic projection is (1/sqrt2, 1/sqrt2)
crit = Fraction(4142, 10000)
# for the interval [crit, 1] the projection from the pole P = (-1, 0)
# approximates "well" the circle
nn = Fraction(1 - crit, n)
# u discretizes the quarter of circle where x, y \geq 0
u = []
for r in range(1, n + 1):
# P = (0, -1), x \in [crit, 1]
u.append([Fraction(2*(crit + r*nn), (crit + r*nn)**2 + 1),
Fraction(2, (crit + r*nn)**2 + 1) - 1])
# P = (-1, 0), y \in [crit, 1]
u.append([Fraction(2, (crit + r*nn)**2 + 1) - 1,
Fraction(2*(crit + r*nn), (crit + r*nn)**2 + 1)])
u = np.array(u)
# u1 discretizes the quarter of circle where x \leq 0, y \geq 0
u1 = np.column_stack((-u[:, 0], u[:, 1]))
u = np.row_stack((u, u1))
# W1 encodes the polyhedron given by the tangency points in u
W1 = np.zeros((u.shape[0] + 1, 9), dtype=fractions.Fraction)
for i in range(u.shape[0]):
W1[i, 2:5] = np.array([1, -u[i, 0], -u[i, 1]])
# This constraint is to get only the half polygon with positive y2
W1[u.shape[0], 4] = 1
Explanation: External polytopes approximating $\mathcal{P}(2,4)$
Here we repeat some of the theory we presented in Appendix D. We would like to find an external polytope that tightly approximates $\mathcal{P}(2,4)$ and then check how much we have to "shrink" it until it fits inside the set of simulable POVMs. Since the operators ${\mathbb{1}, \sigma_x, \sigma_y, \sigma_z}$ form is a basis for the real space of Hermitian matrices $\mathrm{Herm}(\mathbb{C}^2)$, we can write any matrix in this set as $M = \alpha \mathbb{1} + x \sigma_x + y \sigma_y + z \sigma_z$ for some $\alpha, x, y, z$ real numbers. We relax the positivity condition of the measurement effects by requiring $\mathrm{tr}(M|\psi_j\rangle\langle\psi_j|)\geq0$ for some collection of pure states ${|\psi_j\rangle\langle\psi_j|}_{j=1}^N$, which in turn can always be expressed as $|\psi_j\rangle\langle\psi_j|=(1/2)(\mathbb{1}-\vec{v}_j \cdot \vec{\sigma})$ for a vector $\vec{v}_j$ from a unit sphere in $\mathbb{R}^3$. Thus, with the measurement effects also expressed in the same basis, we can write the relaxed positivity conditions as
$(x,y,z)\cdot v_j \leq \alpha,\ j=1,\ldots,N$,
where "$\cdot$" denotes the standard inner product in $\mathbb{R}^3$. We describe the approximating polytope as
$\begin{eqnarray}
&\alpha_i \geq 0, \ i=1,\ldots,4, \quad \sum_i{\alpha_i} = 1,\\
&\sum_i{x_i} = \sum_i{y_i} = \sum_i{z_i} = 0.
\end{eqnarray}$
This yields sixteen real parameters, which would be expensive to treat computationally. We can, however, exploit certain properties that reduce the number of parameters. First of all, since the effects add up to the identity, we can drop the last four parameters. Then, due to invariance properties and the characteristics of extremal POVMs, one parameter is sufficient for the first effect, and three are enough for the second. In total, we are left with eight parameters.
We would like to define inequalities defining the facets of the polytope, run a vertex enumeration algorithm, and refine the polytope further. From a computational perspective, a critical point is enumerating the vertices given the inequalities. For this, two major implementations are available that use fundamentally different algorithms:
cdd and its Python interface.
lrs and its parallel variant plrs. We developed a simple Python wrapper for this implementation.
Using cdd results in fewer vertices, but lrs and plrs run at least a magnitude faster. The function enumerate_vertices abstracts away the implementation, and the user can choose between cdd, lrs, and plrs. Note that format of inequalities is $b+Ax\geq 0$, where $b$ is the constant vector and $A$ is the coefficient matrix. Thus a line in our parametrization is of the form $[b, \alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6, \alpha_7, \alpha_8]$, corresponding to an inequality $b +\alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 + \alpha_4 x_4 + \alpha_5 x_5 + \alpha_6 x_6 + \alpha_7 x_7 + \alpha_8 x_8 \geq 0$.
Since $M_2$ lies in the plane, we consider an approximation to the circle, rather than the sphere. Furthermore, we can always assume that this vector lies in the $y$-positive semi-plane, and take a polygon approximating the semi-circle from the outside, defined by the points of tangency to the semi-circle. In order to obtain a reliable polytope, given only by rational coordinates, we set these points to be the image of a net of 100 points of the interval $[-1,1]$ via the stereographic projection. By dealing only with rational points, we ensure that we can go back and forth from inequalities to vertices recovering the same initial set.
End of explanation
m1 = 2
m2 = 1
# crit is the same as above
mm1 = Fraction(1, m1)
mm2 = Fraction(crit, m2)
# v1 discretizes the positive octant of the sphere
v1 = []
# P = (0, 0, -1), x, y \in [0, 1]
for rx in range(1, m1 + 1):
for ry in range(1, m1 + 1):
v1.append([Fraction(2*(rx*mm1), (rx*mm1)**2 + (ry*mm1)**2 + 1),
Fraction(2*(ry*mm1), (rx*mm1)**2 + (ry*mm1)**2 + 1),
1 - Fraction(2, (rx*mm1)**2 + (ry*mm1)**2 + 1)])
# a second round to improve the approximation around the pole
# P = (0, 0, -1), x, y \in [0, crit]
for rx in range(1, m2 + 1):
for ry in range(1, m2 + 1):
v1.append([Fraction(2*(rx*mm2), (rx*mm2)**2 + (ry*mm2)**2 + 1),
Fraction(2*(ry*mm2), (rx*mm2)**2 + (ry*mm2)**2 + 1),
1 - Fraction(2, (rx*mm2)**2 + (ry*mm2)**2 + 1)])
v1 = np.array(v1)
# we now reflect the positive octant to construct the whole sphere
v1a = np.column_stack((-v1[:, 0], v1[:, 1], v1[:, 2]))
v1 = np.row_stack((v1, v1a))
v1b = np.column_stack((v1[:, 0], -v1[:, 1], v1[:, 2]))
v1 = np.row_stack((v1, v1b))
v1c = np.column_stack((v1[:, 0], v1[:, 1], -v1[:, 2]))
v1 = np.row_stack((v1, v1c))
# the following discretizes the quarters of equators where x, y, z > 0,
# corresponding to the case where rx, ry = 0 above, around the origin
yz = []
xz = []
xy = []
for r in range(1, m1+1):
# P = [0, 0, -1], x = 0, y \in [0, 1]
yz.append([0,
Fraction(2*(r*m1), (r*m1)**2 + 1),
1 - Fraction(2, (r*m1)**2 + 1)])
# P = [0, 0,-1], y = 0, x \in [0, 1]
xz.append([Fraction(2*(r*m1), (r*m1)**2 + 1),
0,
1 - Fraction(2, (r*m1)**2 + 1)])
# P = [0, -1, 0], z = 0, x \in [0, 1]
xy.append([Fraction(2*(r*m1), (r*m1)**2 + 1),
1 - Fraction(2, (r*m1)**2 + 1),
0])
yz = np.array(yz)
xz = np.array(xz)
xy = np.array(xy)
yz1 = np.column_stack((yz[:, 0], -yz[:, 1], yz[:, 2]))
yz2 = np.column_stack((yz[:, 0], yz[:, 1], -yz[:, 2]))
yz3 = np.column_stack((yz[:, 0], -yz[:, 1], -yz[:, 2]))
yz = np.row_stack((yz, yz1, yz2, yz3))
xz1 = np.column_stack((-xz[:, 0], xz[:, 1], xz[:, 2]))
xz2 = np.column_stack((xz[:, 0], xz[:, 1], -xz[:, 2]))
xz3 = np.column_stack((-xz[:, 0], xz[:, 1], -xz[:, 2]))
xz = np.row_stack((xz, xz1, xz2, xz3))
xy1 = np.column_stack((-xy[:, 0], xy[:, 1], xy[:, 2]))
xy2 = np.column_stack((xy[:, 0], -xy[:, 1], xy[:, 2]))
xy3 = np.column_stack((-xy[:, 0], -xy[:, 1], xy[:, 2]))
xy = np.row_stack((xy, xy1, xy2, xy3))
v2 = np.row_stack((yz, xz, xy))
v = np.row_stack((v1, v2))
Explanation: Next, we would like to constrain the third effect $M_3$, and we start this approximation by defining a polytope approximating the unit sphere in $\mathbb{R}^3$. Similarly to the previous case, the approximation is defined by the points of tangency to the sphere, provided by the stereographic projection of a set of rational points contained in $[-1, 1]\times[-1, 1]$.
End of explanation
W2 = np.zeros((v.shape[0], 9), dtype=fractions.Fraction)
for i in range(v.shape[0]):
W2[i, 5:] = np.array([1, -v[i, 0], -v[i, 1], -v[i, 2]])
Explanation: The following constraints ensure that the third operator $M_3$ of the measurement is a quasi-effect, with the approximation given by $v$:
End of explanation
W3 = np.zeros((v.shape[0], 9))
for i in range(v.shape[0]):
W3[i] = [1, -1+v[i, 0], -1, v[i, 0], v[i, 1], -1,
v[i, 0], v[i, 1], v[i, 2]]
Explanation: The next set of constraints ensures the same condition for the last effect, which we express by $\mathbb{1} - M_1 - M_2 - M_3$:
End of explanation
W4 = np.array([[0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0]])
Explanation: We need that $\alpha_0, \alpha_1, \alpha_2 \geq 0$:
End of explanation
W5 = np.array([[1, -1, -1, 0, 0, -1, 0, 0, 0]])
Explanation: We also require that $\alpha_0+\alpha_1+\alpha_2 \leq 1$, which corresponds to expressing the previous constraint for the last effect:
End of explanation
W6 = np.array([[0, 1, -1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, -1, 0, 0, 0],
[-1, 1, 1, 0, 0, 2, 0, 0, 0]])
hull = np.row_stack((W1, W2, W3, W4, W5, W6))
Explanation: Finally, we need that $\alpha_0 \geq \alpha_1 \geq \alpha_2 \geq 1-\alpha_0-\alpha_1-\alpha_2$, a condition that we can impose without lost of generality due to relabeling. Once we have the last constraints, we stack the vectors in a single array.
End of explanation
time0 = time.time()
ext = enumerate_vertices(hull, method="plrs", verbose=1)
print("Vertex enumeration in %d seconds" % (time.time()-time0))
Explanation: We enumerate the vertices, which is a time-consuming operation:
End of explanation
time0 = time.time()
alphas = find_best_shrinking_factor(ext, 2, solver="mosek", parallel=False)
print("\n Found in %d seconds" % (time.time()-time0))
Explanation: As the last step, we iterate the SDP optimization described in Section "Checking critical visibility" over all vertices to get the best shrinking factor. This takes several hours to complete. Parallel computations do not work from a notebook, but they do when the script is executed in a console. For this reason, here we disable parallel computations.
End of explanation
w = np.cos(2*np.pi/3) + 1j*np.sin(2*np.pi/3)
x = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
D = [[], [], []]
for j in range(3):
for k in range(3):
D[j].append(np.matrix((w**(j*k/2))*(
sum(w**(j*m)*x[:, np.mod(k + m, 3)]*dag(x[:, m]) for m in
range(3)))))
Explanation: External polytopes approximating $\mathcal{P}_{\mathrm{cov}}(3,9)$
Following the same reasoning applied to approximate $\mathcal{P}(2,4)$, we now approximate the set of qutrit POVMs $\mathcal{P}_{\mathrm{cov}}(3,9)$ that are covariant with respect to the discrete Heisenberg group. This task is relatively simple, since we need only to consider a quasi-positive seed $M$ and derive the effects from it by conjugating with the unitaries $D_{ij}$, which rotate the space in symmetric directions.
End of explanation
# Discretization of the set of PSD matrices with 9*N elements
N = 2
disc = []
for _ in range(N):
psi = np.matrix(qutip.Qobj.full(qutip.rand_ket(3)))
for j in range(3):
for k in range(3):
disc.append(D[j][k]*(psi*dag(psi))*dag(D[j][k]))
Explanation: We use the approximation to the set of positive semi-definite operators given by $\mathrm{tr}(M\Psi_i)\geq0$, for some finite set of rank-one projectors $\{\Psi_i\}$. We generate random projectors and rotate them by the unitaries $D_{ij}$ above in order to obtain a more regular distribution in the space of projectors.
End of explanation
hull = []
for i in range(9*N):
# each row of hull ensures tr(M*disc[i])>= 0
hull.append([np.real(disc[i][2, 2])/3,
np.real(disc[i][0, 0]) - np.real(disc[i][2, 2]),
2*np.real(disc[i][1, 0]), -2*np.imag(disc[i][1, 0]),
2*np.real(disc[i][2, 0]), -2*np.imag(disc[i][2, 0]),
np.real(disc[i][1, 1]) - np.real(disc[i][2, 2]),
2*np.real(disc[i][2, 1]), -2*np.imag(disc[i][2, 1])])
Explanation: We now translate each of the trace constraints above by writing each $M = (m_{ab})$ as an 8-dimensional real vector $(m_{00}, \mathrm{re}(m_{01}), \mathrm{im}(m_{01}),\ldots, \mathrm{re}(m_{12}), \mathrm{im}(m_{12}))$, where $m_{22} = 1/3 -m_{00} -m_{11}$ since the trace is fixed.
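Spelling the encoding out as a small hypothetical helper (the inverse of the reconstruction used further below):
def effect_to_vec(m):
    # encode a 3x3 effect with fixed trace 1/3 as
    # (m00, Re m01, Im m01, Re m02, Im m02, m11, Re m12, Im m12)
    return [m[0, 0].real, m[0, 1].real, m[0, 1].imag,
            m[0, 2].real, m[0, 2].imag, m[1, 1].real,
            m[1, 2].real, m[1, 2].imag]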
End of explanation
cov_ext = enumerate_vertices(np.array(hull), method="plrs")
Explanation: We then construct the polytope generated by these inequalities.
End of explanation
# Converting vectors into covariant POVMs
povms = []
for i in range(cov_ext.shape[0]):
eff = np.matrix([[cov_ext[i, 1],
cov_ext[i, 2] + cov_ext[i, 3]*1j,
cov_ext[i, 4] + cov_ext[i, 5]*1j],
[cov_ext[i, 2] - cov_ext[i, 3]*1j,
cov_ext[i, 6],
cov_ext[i, 7] + cov_ext[i, 8]*1j],
[cov_ext[i, 4] - cov_ext[i, 5]*1j,
cov_ext[i, 7] - cov_ext[i, 8]*1j,
1/3 - cov_ext[i, 1] - cov_ext[i, 6]]])
M = []
for j in range(3):
for k in range(3):
M.append(D[j][k]*eff*dag(D[j][k]))
povms.append(M)
# Finding the least eigenvalues
A = np.zeros((cov_ext.shape[0]))
for i in range(cov_ext.shape[0]):
A[i] = min(numpy.linalg.eigvalsh(povms[i][0]))
a = min(A)
Explanation: To get an idea of how good the approximation is, we can translate each vertex obtained into a quasi-POVM and check how negative its eigenvalues are.
End of explanation
alphas = find_best_shrinking_factor(cov_ext, 3, parallel=True)
Explanation: We then optimise over the extremal points of the polytope.
End of explanation
from qutip import rand_unitary
def get_random_trace_one_povm(dim=3):
U = rand_unitary(dim)
M = [U[:, i]*dag(U[:, i]) for i in range(dim)]
for _ in range(dim-1):
U = rand_unitary(dim)
r = random.random()
for i in range(dim):
M[i] = r*M[i] + (1-r)*U[:, i]*dag(U[:, i])
return M
Explanation: Decomposition of qutrit three-outcome, trace-one POVMs into projective measurements
We implemented the constructive strategy to find a projective decomposition for trace-one qutrit measurements $\mathbf{M}\in\mathcal{P}_1(3,3)$ that we described in Appendix D.
First we define a function to generate a random trace-1 POVM. This is the only step that requires an additional dependency compared to the ones we loaded in the beginning of the notebook. The dependency is QuTiP.
End of explanation
M = get_random_trace_one_povm()
print("Rank of POVM: ", check_ranks(M))
coefficients, projective_measurements = decomposePovmToProjective(M)
Explanation: Then we decompose a random POVM following the cascade of "rank reductions" described in Appendix D, and check the ranks:
End of explanation
print("Ranks of projective measurements: ")
for measurement in projective_measurements:
print(check_ranks(measurement, tolerance=0.01))
Explanation: As a sanity check, we look at the ranks of the effects of the individual projective measurements. We must point out that the numerical calculations occasionally fail, and we set the tolerance in rank calculations high.
End of explanation
N = coefficients[0]*projective_measurements[0] + \
coefficients[1]*(coefficients[2]*projective_measurements[1] +
coefficients[3]*(coefficients[4]*(coefficients[6]*projective_measurements[2] +
coefficients[7]*projective_measurements[3]) +
coefficients[5]*(coefficients[8]*projective_measurements[4] +
coefficients[9]*projective_measurements[5])))
not np.any(M - N > 10e-10)
Explanation: We show that the projective measurements indeed return the POVM:
End of explanation |
13,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
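Going back is then just the inverse transformation; for example, for the target column (a small sketch):
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt']*std + mean   # undo the standardization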
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = None # signals into hidden layer
hidden_outputs = None # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = None # signals into final output layer
final_outputs = None # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = None # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the backpropagated error term (delta) for the output
output_error_term = None
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = None
# TODO: Calculate the backpropagated error term (delta) for the hidden layer
hidden_error_term = None
# Weight step (input to hidden)
delta_weights_i_h += None
# Weight step (hidden to output)
delta_weights_h_o += None
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += None # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += None # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = None # signals into hidden layer
hidden_outputs = None # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = None # signals into final output layer
final_outputs = None # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
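One possible way to fill in the TODOs, shown only as a reference sketch (these are the lines that would replace the None placeholders in the scaffold above, in order):
# sigmoid activation
self.activation_function = lambda x: 1 / (1 + np.exp(-x))
# forward pass (same calculations in train and run)
hidden_inputs = np.dot(X, self.weights_input_to_hidden)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
final_outputs = final_inputs                       # output activation is f(x) = x
# backward pass
error = y - final_outputs
output_error_term = error                          # derivative of f(x) = x is 1
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
delta_weights_i_h += hidden_error_term * X[:, None]
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# weight updates, applied once per batch
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records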
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
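For reference, the MSE loss used in the training loop above is just the mean of squared errors. It is not defined in this excerpt, so a minimal definition consistent with how it is called (predictions and targets as NumPy arrays) would be:
def MSE(y, Y):
    return np.mean((y - Y) ** 2)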
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
13,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.eco - Putting sessions 1 and 2 into practice - Using pandas and visualization - solution
Solution to exercise 3 on Vélib availability.
Step1: Exercise 3 - Vélib availability
Duration
Step2: Import the file as a DataFrame
Step3: Plot the latitude and longitude (they are stored in a dict | Python Code:
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.datasource import download_data
files = download_data("td2a_eco_exercices_de_manipulation_de_donnees.zip",
url="https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/notebooks/td2a_eco/data/")
files
Explanation: 2A.eco - Putting sessions 1 and 2 into practice - Using pandas and visualization - solution
Solution to exercise 3 on Vélib availability.
End of explanation
import pandas as pd
import json
import matplotlib.pyplot as plt
Explanation: Exercise 3 - Vélib availability
Duration: 30 minutes
Import the data as a DataFrame
velib_t1.txt - with the station data at a time $t$
velib_t2.txt - with the station data at a time $t + 1$
Plot the location of the Vélib stations in Paris
show the stations as a function of the number of stands, using a color gradient
For a given station, compare how availability evolved (by merging the two datasets $t$ and $t+1$)
show the stations that saw a significant change (more than 5 changes), using a color gradient
End of explanation
with open("velib_t1.txt") as f:
dic = json.load(f)
df = pd.DataFrame.from_dict(dic)
df.head(n=2)
Explanation: Import the file as a DataFrame
End of explanation
df['lat']=df['position'].map(lambda x : x['lat'])
df['lng']=df['position'].map(lambda x : x['lng'])
plt.scatter(df.lng, df.lat)
plt.scatter(2.342529,48.892931, color="#CA9912") # Home
plt.scatter(2.329119,48.872243, color="r") # Office
plt.title("Vélib stations")
#plt.show()
plt.scatter(df.lng, df.lat, c=df.bike_stands, cmap="YlOrRd")
plt.colorbar()
plt.title("Les plus grosses stations de vélib")
import json
import pandas as pd
file = 'velib_t2.txt'
with open(file) as file_stations_t2:
data_stations_t2 = json.loads(file_stations_t2.read())
df_stations_t2 = pd.DataFrame(data_stations_t2)
liste_stations = df_stations_t2['number'].tolist()
evolutions_stations = pd.merge(left = df, right = df_stations_t2, on = 'number')
evolutions_stations.columns
evolutions_stations = evolutions_stations[['address_x', 'number',
'available_bike_stands_x','available_bike_stands_y','lng','lat']]
evolutions_stations['variation'] = evolutions_stations['available_bike_stands_x'] - \
evolutions_stations['available_bike_stands_y']
lng_var = evolutions_stations[evolutions_stations['variation']>5]["lng"].tolist()
lat_var = evolutions_stations[evolutions_stations['variation']>5]["lat"].tolist()
valeurs = evolutions_stations[evolutions_stations['variation']>5]["variation"].tolist()
labels = evolutions_stations[evolutions_stations['variation']>5]["address_x"].tolist()
scaled_z = []
for value in valeurs :
scaled_z.append((value - min(valeurs)) / (max(valeurs) - min(valeurs)))
colors = plt.cm.coolwarm(scaled_z)
fig = plt.figure(figsize=(20,15))
for label, x, y in zip(labels, lng_var, lat_var):
plt.annotate(label, xy = (x, y),)
plt.scatter(x = lng_var , y = lat_var, c = colors, marker = "o", s = 100)
valeurs
Explanation: Plot the latitude and longitude (they are stored in a dict: they need to be extracted first)
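An alternative to the map-based extraction used above is json_normalize, which flattens nested dictionaries into columns. A small sketch (assuming each record's 'position' value is a dict with 'lat' and 'lng' keys, as in the data loaded above):
from pandas.io.json import json_normalize  # pd.json_normalize in recent pandas versions

coords = json_normalize(df['position'].tolist())  # DataFrame with 'lat' and 'lng' columns
df[['lat', 'lng']] = coords[['lat', 'lng']].values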
End of explanation |
13,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST
Step1: Dataset description
Datasource
Step2: Distribution of class frequencies
Step3: Chisquare test on class frequencies
Step4: Display a few sample images
Step5: View different variations of a digit
Step6: Feature scaling
Step7: Applying logistic regression classifier
Step8: Display wrong predictions
Step9: Applying SGD classifier | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import *
import scipy
%matplotlib inline
Explanation: MNIST
End of explanation
training = pd.read_csv("/data/MNIST/mnist_train.csv", header = None)
testing = pd.read_csv("/data/MNIST/mnist_test.csv", header = None)
X_train, y_train = training.iloc[:, 1:].values, training.iloc[:, 0].values
X_test, y_test = testing.iloc[:, 1:].values, testing.iloc[:, 0].values
print("Shape of X_train: ", X_train.shape, "shape of y_train: ", y_train.shape)
print("Shape of X_test: ", X_test.shape, "shape of y_test: ", y_test.shape)
Explanation: Dataset description
Datasource: http://yann.lecun.com/exdb/mnist/
The training set consists of 60,000 digits and the test set contains 10,000 samples. The images in the MNIST dataset consist of 28×28 pixels, and each pixel is represented by a grayscale intensity value. Here, we unroll the pixels into 1D row vectors, which represent the rows in our image array (784 per row or image). The labels array contains the corresponding target variable, the class labels (integers 0-9) of the handwritten digits.
CSV versions of the files are available at the following links.
CSV training set http://www.pjreddie.com/media/files/mnist_train.csv
CSV test set http://www.pjreddie.com/media/files/mnist_test.csv
End of explanation
label_counts = pd.DataFrame({
"train": pd.Series(y_train).value_counts().sort_index(),
"test": pd.Series(y_test).value_counts().sort_index()
})
(label_counts / label_counts.sum()).plot.bar()
plt.xlabel("Class")
plt.ylabel("Frequency (normed)")
Explanation: Distribution of class frequencies
End of explanation
scipy.stats.chisquare(label_counts.train, label_counts.test)
Explanation: Chisquare test on class frequencies
End of explanation
fig, axes = plt.subplots(5, 5, figsize = (15, 9))
for i, ax in enumerate(fig.axes):
img = X_train[i, :].reshape(28, 28)
ax.imshow(img, cmap = "Greys", interpolation="nearest")
ax.set_title("True: %i" % y_train[i])
plt.tight_layout()
Explanation: Display a few sample images
End of explanation
fig, axes = plt.subplots(10, 5, figsize = (15, 20))
for i, ax in enumerate(fig.axes):
img = X_train[y_train == 7][i, :].reshape(28, 28)
ax.imshow(img, cmap = "Greys", interpolation="nearest")
plt.tight_layout()
Explanation: View different variations of a digit
End of explanation
scaler = preprocessing.StandardScaler()
X_train_std = scaler.fit_transform(X_train.astype(np.float64))
X_test_std = scaler.transform(X_test.astype(np.float64))
Explanation: Feature scaling
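StandardScaler standardizes each pixel column to zero mean and unit variance. For image data, a simpler alternative (shown only as a sketch here, not used in the rest of this notebook) is to scale the raw intensities to the [0, 1] range:
X_train_01 = X_train.astype(np.float64) / 255.0  # pixel intensities are 0-255
X_test_01 = X_test.astype(np.float64) / 255.0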
End of explanation
%%time
lr = linear_model.LogisticRegression()
lr.fit(X_train_std, y_train)
print("accuracy:", lr.score(X_test_std, y_test))
Explanation: Applying logistic regression classifier
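Beyond the single accuracy number printed above, a per-class breakdown can show which digits get confused with each other. A quick sketch using scikit-learn's confusion matrix (assuming the fitted lr from the previous cell):
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, lr.predict(X_test_std))
print(pd.DataFrame(cm))  # rows: true digit, columns: predicted digit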
End of explanation
y_test_pred = lr.predict(X_test_std)
miss_indices = (y_test != y_test_pred)
misses = X_test[miss_indices]
print("No of miss: ", misses.shape[0])
fig, axes = plt.subplots(10, 5, figsize = (15, 20))
misses_actual = y_test[miss_indices]
misses_pred = y_test_pred[miss_indices]
for i, ax in enumerate(fig.axes):
img = misses[i].reshape(28, 28)
ax.imshow(img, cmap = "Greys", interpolation="nearest")
ax.set_title("A: %s, P: %d" % (misses_actual[i], misses_pred[i]))
plt.tight_layout()
Explanation: Display wrong predictions
End of explanation
inits = np.random.randn(10, 784)
inits = inits / np.std(inits, axis=1).reshape(10, -1)
%%time
est = linear_model.SGDClassifier(n_jobs=4, tol=1e-5, eta0 = 0.15,
learning_rate = "invscaling",
alpha = 0.01, max_iter= 100)
est.fit(X_train_std, y_train, inits)
print("accuracy", est.score(X_test_std, y_test), "iterations:", est.n_iter_)
fig, _ = plt.subplots(3, 4, figsize = (15, 10))
for i, ax in enumerate(fig.axes):
if i < est.coef_.shape[0]:
ax.imshow(est.coef_[i, :].reshape(28, 28), cmap = "bwr", interpolation="nearest")
else:
ax.remove()
pd.DataFrame(est.coef_[0, :].reshape(28, 28))
Explanation: Applying SGD classifier
End of explanation |
13,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fire up graphlab create
Step1: Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
Step2: Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
Step3: Fit the regression model using crime as the feature
Step4: Let's see what our fit looks like
Matplotlib is a Python plotting library that we can use to visualize the fit. You can install it with
Step5: Above
Step6: Refit our simple regression model on this modified dataset
Step7: Look at the fit
Step8: Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
Step9: Above
Step10: Do the coefficients change much? | Python Code:
%matplotlib inline
import graphlab
Explanation: Fire up graphlab create
End of explanation
sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')
sales
Explanation: Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
End of explanation
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
Explanation: Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
End of explanation
crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'],validation_set=None,verbose=False)
Explanation: Fit the regression model using crime as the feature
End of explanation
import matplotlib.pyplot as plt
plt.plot(sales['CrimeRate'],sales['HousePrice'],'.',
sales['CrimeRate'],crime_model.predict(sales),'-')
Explanation: Let's see what our fit looks like
Matplotlib is a Python plotting library that we can use to visualize the fit. You can install it with:
'pip install matplotlib'
End of explanation
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
Explanation: Above: blue dots are original data, green line is the fit from the simple regression.
Remove Center City and redo the analysis
Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.
End of explanation
crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
Explanation: Refit our simple regression model on this modified dataset:
End of explanation
plt.plot(sales_noCC['CrimeRate'],sales_noCC['HousePrice'],'.',
sales_noCC['CrimeRate'],crime_model_noCC.predict(sales_noCC),'-')
Explanation: Look at the fit:
End of explanation
crime_model.get('coefficients')
crime_model_noCC.get('coefficients')
Explanation: Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
End of explanation
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
Explanation: Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different!
High leverage points:
Center City is said to be a "high leverage" point because it is at an extreme x value where there are no other observations. As a result, recalling the closed-form solution for simple regression, this point has the potential to dramatically change the least squares line since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. On the other hand, if this point somehow differs, it can be strongly influential in the resulting fit.
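For reference, the closed-form slope of simple least squares weights each observation by its deviation from the mean of x, which is why a single extreme-x point can pull the whole line. A standalone NumPy sketch of that formula (illustrative only; x and y are assumed to be plain 1-D arrays of the feature and target):
import numpy as np

def simple_ols_slope(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x - x.mean()
    # slope = sum((x - mean_x) * (y - mean_y)) / sum((x - mean_x)^2)
    return np.sum(dx * (y - y.mean())) / np.sum(dx * dx)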
Influential observations:
An influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are not leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value).
Remove high-value outlier neighborhoods and redo analysis
Based on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.
End of explanation
crime_model_noCC.get('coefficients')
crime_model_nohighend.get('coefficients')
Explanation: Do the coefficients change much?
End of explanation |
13,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the 'First steps with pandas'!
After this workshop you can (hopefully) call yourselves Data Scientists!
Gitter
Step1: What is pandas?
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
Why to use it?
It has ready solutions for most data-related problems
fast development
no reinventing the wheel
fewer mistakes/bugs
Step2: It is easy to pick up
few simple concepts that are very powerful
easy, standardized API
good code readability
It is reasonably fast
Step3: It has a very cool name.
https
Step4: DataFrame
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects.
Creating
Step5: Meta data
Step6: Selecting
[string] --> Series
[ list of strings ] --> DataFrame
Step7: Chaining (most operations on a DataFrame return a new DataFrame or Series)
Step8: EXERCISE
Create DataFrame presented below in 3 different ways
movie_title imdb_score
0 Avatar 7.9
1 Pirates of the Caribbean
Step9: With list of dicts
Step10: With list of Series
Step11: I/O part I
Reading popular formats / data sources
Step12: EXERCISE
Load movies from data/movies.csv to variable called movies
Step13: Analyze what dimensions and columns it has
Step14: Filtering
Step15: Boolean indexing
Step16: Multiple conditions
Step17: Negation
~ is a negation operator
Step18: Filtering for containing one of many values (SQL's IN)
Step19: EXERCISE
What movies have been directed by Clint Eastwood?
Step20: What movies have earned above $500m?
Step21: Are there any Polish movies?
Step22: What are really popular great movies? (> 100k FB likes, > 8.5 IMDB score)
Step23: In what movies main role was played by brutals like "Jason Statham", "Sylvester Stallone" or god ("Morgan Freeman")?
Step24: I/O part O
As numpy array
Step25: As (list) of dicts
Step26: As popular data format
Step27: EXERCISE
Create a csv with movie titles and cast (actors) of movies with budget above $200m
Step28: Create a list of dicts with movie titles and facebook likes of all Christopher Nolan's movies
Step29: New columns
Step30: Creating new column
Step31: Vector operations
Step32: Map, apply, applymap, str
Step33: Cheatsheet
map
Step34: EXERCISE
What are 10 most profitable movies? (ratio between gross and budget)
Step35: Create a column 'first_genre'. What is the distribution of values in this column?
Step36: Visualizing data
Step37: Basic stats
Step38: Plotting
Step39: Histogram
Step40: DataFrames everywhere.. are easy to plot
Step41: EXERCISE
Create a chart presenting grosses of movies directed by Christopher Nolan
Step42: What are typical durations of the movies?
Step43: What is percentage distribution of first genre? (cake)
Step44: Aggregation
Grouping
https
Step45: Aggregating
Step46: EXERCISE
What was maximal gross in each year?
Step47: Which director earns the most on average?
Step48: Index related operations
Data alignment on Index
Step49: Setting index
Step50: Merging two DFs (a'la SQL join)
Step51: DateTime operations
Step52: Filtering
Step53: Resampling (downsampling and upsampling)
Step54: EXERCISE
Read Star Wars
Step55: How many tweets did Star Wars
Step56: What were the most popular tweeting times of the day for that movie?
Step57: Advanced topics + Advanced exercises
Filling missing data
Step58: Scikit-learn
Step59: More models to try
Step60: Create a method discovering movies with plot keywords similar to the given list of keywords (i.e. ['magic', 'harry', 'wizard'])
Step61: Integration with Flask
In the file flask_exercise.py you'll find the scaffolding for a Flask app. Create endpoints returning | Python Code:
import platform
print('Python: ' + platform.python_version())
import numpy as np
print('numpy: ' + np.__version__)
import pandas as pd
print('pandas: ' + pd.__version__)
import scipy
print('scipy: ' + scipy.__version__)
import sklearn
print('scikit-learn: ' + sklearn.__version__)
import matplotlib as plt
print('matplotlib: ' + plt.__version__)
import flask
print('flask: ' + flask.__version__)
Explanation: Welcome to the 'First steps with pandas'!
After this workshop you can (hopefully) call yourselves Data Scientists!
Gitter: https://gitter.im/first-steps-with-pandas-workshop
Before coding, let's check whether we have proper versions of libraries
End of explanation
# In case of no Internet, use:
# pd.read_json('data/cached_Python.json')
(
pd.read_json('https://raw.githubusercontent.com/Nozdi/first-steps-with-pandas-workshop/master/data/cached_python.json')
.resample('1W')
.mean()
['daily_views']
)
Explanation: What is pandas?
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
Why to use it?
It has ready solutions for most data-related problems
fast development
no reinventing the wheel
fewer mistakes/bugs
End of explanation
some_data = [ list(range(1,100)) for x in range(1,1000) ]
some_df = pd.DataFrame(some_data)
def standard_way(data):
return [[col*2 for col in row] for row in data]
def pandas_way(df):
return df * 2
%timeit standard_way(some_data)
%timeit pandas_way(some_df)
Explanation: It is easy to pick up
few simple concepts that are very powerful
easy, standardized API
good code readability
It is reasonably fast
End of explanation
strengths = pd.Series([400, 200, 300, 400, 500])
strengths
names = pd.Series(["Batman", "Robin", "Spiderman", "Robocop", "Terminator"])
names
Explanation: It has a very cool name.
https://c1.staticflickr.com/5/4058/4466498508_35a8172ac1_b.jpg
Library highlights
http://pandas.pydata.org/#library-highlights<br/>
http://pandas.pydata.org/pandas-docs/stable/api.html
Data structures
Series
Series is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers, Python objects, etc.).
End of explanation
heroes = pd.DataFrame({
'hero': names,
'strength': strengths
})
heroes
other_heroes = pd.DataFrame([
dict(hero="Hercules", strength=800),
dict(hero="Conan")
])
other_heroes
another_heroes = pd.DataFrame([
pd.Series(["Wonder Woman", 10, 3], index=["hero", "strength", "cookies"]),
pd.Series(["Xena", 20, 0], index=["hero", "strength", "cookies"])
])
another_heroes
Explanation: DataFrame
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects.
Creating
End of explanation
another_heroes.columns
another_heroes.shape
another_heroes.info()
Explanation: Meta data
End of explanation
another_heroes['cookies']
another_heroes.cookies
another_heroes[ ['hero', 'cookies'] ]
Explanation: Selecting
[string] --> Series
[ list of strings ] --> DataFrame
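Column selection is shown above; rows can also be selected by index label with .loc or by position with .iloc. A quick illustrative sketch on the same DataFrame (which uses the default integer index):
another_heroes.loc[0]           # row with index label 0 (a Series)
another_heroes.iloc[0:2]        # first two rows by position (a DataFrame)
another_heroes.loc[0, 'hero']   # a single cell: row label 0, column 'hero'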
End of explanation
another_heroes[['hero', 'cookies']][['cookies']]
another_heroes[['hero', 'cookies']][['cookies']]['cookies']
Explanation: Chaining (most operations on a DataFrame return a new DataFrame or Series)
End of explanation
# Solution here
titles = pd.Series(["Avatar", "Pirates of the Caribbean: At World's End", "Spectre"])
imdb_scores = pd.Series([7.9, 7.1, 6.8])
pd.DataFrame({'movie_title': titles, 'imdb_score': imdb_scores})
Explanation: EXERCISE
Create DataFrame presented below in 3 different ways
movie_title imdb_score
0 Avatar 7.9
1 Pirates of the Caribbean: At World's End 7.1
2 Spectre 6.8
Help: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#from-dict-of-series-or-dicts
With dict of Series
End of explanation
# Solution here
pd.DataFrame([
dict(movie_title="Avatar", imdb_score=7.9),
dict(movie_title="Pirates of the Caribbean: At World's End", imdb_score=7.1),
dict(movie_title="Spectre", imdb_score=6.8),
])
Explanation: With list of dicts
End of explanation
# Solution here
pd.DataFrame([
pd.Series(["Avatar", 7.9], index=['movie_title', 'imdb_score']),
pd.Series(["Pirates of the Caribbean: At World's End", 7.1], index=['movie_title', 'imdb_score']),
pd.Series(["Spectre", 6.8], index=['movie_title', 'imdb_score'])
])
Explanation: With list of Series
End of explanation
# Uncomment and press tab..
# pd.read_
# SQL, csv, hdf
# pd.read_csv?
# executing bash in jupyter notebook
!head -c 500 data/cached_python.json
pd.read_json('data/cached_python.json')
Explanation: I/O part I
Reading popular formats / data sources
End of explanation
# Solution here
movies = pd.read_csv('data/movies.csv')
movies.head()
Explanation: EXERCISE
Load movies from data/movies.csv to variable called movies
End of explanation
# Solution here
print(movies.shape)
print(movies.columns)
Explanation: Analyze what dimensions and columns it has
End of explanation
heroes
Explanation: Filtering
End of explanation
heroes['strength'] == 400
heroes[heroes['strength'] == 400]
heroes[heroes['strength'] > 400]
Explanation: Boolean indexing
End of explanation
try:
heroes[200 < heroes['strength'] < 400]
except ValueError:
print("This cool Python syntax ain't work :(")
heroes[
(heroes['strength'] > 200) &
(heroes['strength'] < 400)
]
heroes[
(heroes['strength'] <= 200) |
(heroes['strength'] >= 400)
]
Explanation: Multiple conditions
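An equivalent and sometimes more readable way to combine conditions is DataFrame.query, which also accepts chained comparisons. A small sketch of the same filters:
heroes.query("200 < strength < 400")
heroes.query("strength <= 200 or strength >= 400")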
End of explanation
~(heroes['strength'] == 400)
heroes['strength'] != 400
heroes[~(
(heroes['strength'] <= 200) |
(heroes['strength'] >= 400)
)]
Explanation: Negation
~ is a negation operator
End of explanation
heroes[
heroes['hero'].isin(['Batman', 'Robin'])
]
Explanation: Filtering for containing one of many values (SQL's IN)
End of explanation
# Solution here
movies[movies['director_name'] == "Clint Eastwood"]
Explanation: EXERCISE
What movies have been directed by Clint Eastwood?
End of explanation
# Solution here
movies[movies['gross'] > 500e6]['movie_title']
Explanation: What movies have earned above $500m?
End of explanation
# Solution here
movies[movies['language'] == 'Polish']['movie_title']
Explanation: Are there any Polish movies?
End of explanation
# Solution here
movies[
(movies['movie_facebook_likes'] > 100000) &
(movies['imdb_score'] > 8.5)
]['movie_title']
Explanation: What are really popular great movies? (> 100k FB likes, > 8.5 IMDB score)
End of explanation
# Solution here
brutals = ["Jason Statham", "Sylvester Stallone"]
god = "Morgan Freeman"
movies[
(movies['actor_1_name'].isin(brutals)) |
(movies['actor_1_name'] == god)
]['movie_title'].head()
Explanation: In what movies main role was played by brutals like "Jason Statham", "Sylvester Stallone" or god ("Morgan Freeman")?
End of explanation
heroes.values
Explanation: I/O part O
As numpy array
End of explanation
heroes.to_dict()
heroes.to_dict('records')
Explanation: As (list) of dicts
End of explanation
heroes.to_json()
heroes.to_json(orient='records')
heroes.to_csv()
heroes.to_csv(index=False)
heroes.to_csv('data/heroes.csv', index=False)
Explanation: As popular data format
End of explanation
# Solution here
cols = [
'movie_title',
'actor_1_name',
'actor_2_name',
'actor_3_name',
'budget'
]
movies[movies['budget'] > 200e6][cols].to_csv("data/expensive-cast.csv", index=False)
Explanation: EXERCISE
Create a csv with movie titles and cast (actors) of movies with budget above $200m
End of explanation
# Solution here
cols = [
'movie_title',
'movie_facebook_likes'
]
movies[movies['director_name'] == 'Christopher Nolan'][cols].to_dict('r')
Explanation: Create a list of dicts with movie titles and facebook likes of all Christopher Nolan's movies
End of explanation
heroes
Explanation: New columns
End of explanation
heroes['health'] = np.NaN
heroes.head()
heroes['health'] = 100
heroes.head()
heroes['height'] = [180, 170, 175, 190, 185]
heroes
heroes['is_hungry'] = pd.Series([True, False, False, True, True])
heroes
Explanation: Creating new column
End of explanation
heroes['strength'] * 2
heroes['strength'] / heroes['height']
heroes['strength_per_cm'] = heroes['strength'] / heroes['height']
heroes
Explanation: Vector operations
End of explanation
pd.Series([1, 2, 3]).map(lambda x: x**3)
pd.Series(['Batman', 'Robin']).map(lambda x: x[:2])
# however, more idiomatic approach for strings is to do..
pd.Series(['Batman', 'Robin']).str[:2]
pd.Series(['Batman', 'Robin']).str.lower()
pd.Series([
['Batman', 'Robin'],
['Robocop']
]).map(len)
heroes['code'] = heroes['hero'].map(lambda name: name[:2])
heroes
heroes['effective_strength'] = heroes.apply(
lambda row: (not row['is_hungry']) * row['strength'],
axis=1
)
heroes.head()
heroes[['health', 'strength']] = heroes[['health', 'strength']].applymap(
lambda x: x + 100
)
heroes
Explanation: Map, apply, applymap, str
End of explanation
heroes['strength'].value_counts()
heroes.sort_values('strength')
heroes.sort_values(
['is_hungry', 'code'],
ascending=[False, True]
)
Explanation: Cheatsheet
map: element-wise on a Series (1 => 1)
apply: whole rows or columns of a DataFrame at a time (n => 1 per row/column)
applymap: element-wise on a DataFrame (n => n)
Sorting and value counts (bonus skill)
End of explanation
# Solution here
movies['profitability'] = movies['gross'] / movies['budget']
movies.sort_values('profitability', ascending=False).head(10)
Explanation: EXERCISE
What are 10 most profitable movies? (ratio between gross and budget)
End of explanation
# Solution here
movies['first_genre'] = movies['genres'].str.split('|').str[0]
movies.head()
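# The second part of the exercise asks for the distribution of values in the new column;
# value_counts() shows how many movies fall into each first genre:
movies['first_genre'].value_counts()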
Explanation: Create a column 'first_genre'. What is the distribution of values in this column?
End of explanation
heroes
Explanation: Visualizing data
End of explanation
heroes.describe()
Explanation: Basic stats
End of explanation
%matplotlib inline
pd.Series([1, 2, 3]).plot()
pd.Series([1, 2, 3], index=['Batman', 'Robin', 'Rambo']).plot()
pd.Series([1, 2, 3], index=['Batman', 'Robin', 'Rambo']).plot(kind='bar')
pd.Series([1, 2, 3], index=['Batman', 'Robin', 'Rambo']).plot(
kind='bar',
figsize=(15, 6)
)
pd.Series([1, 2, 3], index=['Batman', 'Robin', 'Rambo']).plot(kind='pie')
heroes.plot()
indexed_heroes = heroes.set_index('hero')
indexed_heroes
indexed_heroes.plot()
indexed_heroes.plot(kind='barh')
indexed_heroes.plot(kind='bar', subplots=True, figsize=(15, 15))
indexed_heroes[['height', 'strength']].plot(kind='bar')
heroes.plot(x='hero', y=['height', 'strength'], kind='bar')
# alternative to subplots
heroes.plot(
x='hero',
y=['height', 'strength'],
kind='bar',
secondary_y='strength',
figsize=(10,8)
)
heroes.plot(
x='hero',
y=['height', 'strength'],
kind='bar',
secondary_y='strength',
title='Super plot of super heroes',
figsize=(10,8)
)
Explanation: Plotting
End of explanation
heroes.hist(figsize=(10, 10))
heroes.hist(
figsize=(10, 10),
bins=2
)
Explanation: Histogram
End of explanation
heroes.describe()['strength'].plot(kind='bar')
Explanation: DataFrames everywhere.. are easy to plot
End of explanation
# Solution here
nolan_movies = movies[movies['director_name'] == 'Christopher Nolan']
nolan_movies = nolan_movies.set_index('movie_title')
nolan_movies['gross'].plot(kind='bar')
Explanation: EXERCISE
Create a chart presenting grosses of movies directed by Christopher Nolan
End of explanation
# Solution here
movies['duration'].hist(bins=25)
Explanation: What are typical durations of the movies?
End of explanation
# Solution here
movies['first_genre'].value_counts().plot(
kind='pie',
figsize=(15,15)
)
Explanation: What is percentage distribution of first genre? (cake)
End of explanation
movie_heroes = pd.DataFrame({
'hero': ['Batman', 'Robin', 'Spiderman', 'Robocop', 'Lex Luthor', 'Dr Octopus'],
'movie': ['Batman', 'Batman', 'Spiderman', 'Robocop', 'Spiderman', 'Spiderman'],
'strength': [400, 100, 400, 560, 89, 300],
'speed': [100, 10, 200, 1, 20, None],
})
movie_heroes = movie_heroes.set_index('hero')
movie_heroes
movie_heroes.groupby('movie')
list(movie_heroes.groupby('movie'))
Explanation: Aggregation
Grouping
https://www.safaribooksonline.com/library/view/learning-pandas/9781783985128/graphics/5128OS_09_01.jpg
End of explanation
movie_heroes.groupby('movie').size()
movie_heroes.groupby('movie').count()
movie_heroes.groupby('movie')['speed'].sum()
movie_heroes.groupby('movie').mean()
movie_heroes.groupby('movie').apply(
lambda group: group['strength'] / group['strength'].max()
)
movie_heroes.groupby('movie').agg({
'speed': 'mean',
'strength': 'max',
})
movie_heroes = movie_heroes.reset_index()
movie_heroes
movie_heroes.groupby(['movie', 'hero']).mean()
Explanation: Aggregating
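A related pattern worth knowing is groupby(...).transform, which broadcasts a group-level statistic back onto every row of the original frame. A small sketch (the new column name is arbitrary):
movie_heroes['movie_mean_strength'] = (
    movie_heroes
    .groupby('movie')['strength']
    .transform('mean')
)
movie_heroes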
End of explanation
# Solution here
movies.groupby('title_year')['gross'].max().tail(10).plot(kind='bar')
Explanation: EXERCISE
What was maximal gross in each year?
End of explanation
# Solution here
(
movies
.groupby('director_name')
['gross']
.mean()
.sort_values(ascending=False)
.head(3)
)
Explanation: Which director earns the most on average?
End of explanation
movie_heroes
apetite = pd.DataFrame([
dict(hero='Spiderman', is_hungry=True),
dict(hero='Robocop', is_hungry=False)
])
apetite
movie_heroes['is_hungry'] = apetite['is_hungry']
movie_heroes
apetite.index = [2, 3]
movie_heroes['is_hungry'] = apetite['is_hungry']
movie_heroes
Explanation: Index related operations
Data alignment on Index
End of explanation
indexed_movie_heroes = movie_heroes.set_index('hero')
indexed_movie_heroes
indexed_apetite = apetite.set_index('hero')
indexed_apetite
# and alignment works well automagically..
indexed_movie_heroes['is_hungry'] = indexed_apetite['is_hungry']
indexed_movie_heroes
Explanation: Setting index
End of explanation
movie_heroes
apetite
# couple of other arguments available here
pd.merge(
movie_heroes[['hero', 'speed']],
apetite,
on=['hero'],
how='outer'
)
Explanation: Merging two DFs (a'la SQL join)
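The how argument controls the join type, just as in SQL: 'inner' keeps only matching keys, 'left' and 'right' keep all rows from one side, and 'outer' keeps everything. A sketch of a left join on the same frames:
pd.merge(
    movie_heroes[['hero', 'speed']],
    apetite,
    on='hero',
    how='left'
)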
End of explanation
spiderman_meals = pd.DataFrame([
dict(time='2016-10-15 10:00', calories=300),
dict(time='2016-10-15 13:00', calories=900),
dict(time='2016-10-15 15:00', calories=1200),
dict(time='2016-10-15 21:00', calories=700),
dict(time='2016-10-16 07:00', calories=1600),
dict(time='2016-10-16 13:00', calories=600),
dict(time='2016-10-16 16:00', calories=900),
dict(time='2016-10-16 20:00', calories=500),
dict(time='2016-10-16 21:00', calories=300),
dict(time='2016-10-17 08:00', calories=900),
])
spiderman_meals
spiderman_meals.dtypes
spiderman_meals['time'] = pd.to_datetime(spiderman_meals['time'])
spiderman_meals.dtypes
spiderman_meals
spiderman_meals = spiderman_meals.set_index('time')
spiderman_meals
spiderman_meals.index
Explanation: DateTime operations
End of explanation
spiderman_meals["2016-10-15"]
spiderman_meals["2016-10-16 10:00":]
spiderman_meals["2016-10-16 10:00":"2016-10-16 20:00"]
spiderman_meals["2016-10"]
Explanation: Filtering
End of explanation
spiderman_meals.resample('1D').sum()
spiderman_meals.resample('1H').mean()
spiderman_meals.resample('1H').ffill()
spiderman_meals.resample('1D').first()
Explanation: Resampling (downsampling and upsampling)
End of explanation
# Solution here
force_awakens_tweets = pd.read_csv(
'data/theforceawakens_tweets.csv',
parse_dates=['created_at'],
index_col='created_at'
)
force_awakens_tweets.head()
Explanation: EXERCISE
Read Star Wars: The Force Awakens's tweets from data/theforceawakens_tweets.csv. Create DateTimeIndex from created_at column.
End of explanation
# Solution here
force_awakens_tweets.resample('1D').count()
Explanation: How many tweets did Star Wars: The Force Awakens have on each of the last days?
End of explanation
# Solution here
(
force_awakens_tweets
.resample('4H')
.count()
.plot(figsize=(15, 5))
)
(
force_awakens_tweets["2016-09-29":]
.resample('1H')
.count()
.plot(figsize=(15, 5))
)
Explanation: What were the most popular tweeting times of the day for that movie?
End of explanation
heroes_with_missing = pd.DataFrame([
('Batman', None, None),
('Robin', None, 100),
('Spiderman', 400, 90),
('Robocop', 500, 95),
('Terminator', 600, None)
], columns=['hero', 'strength', 'health'])
heroes_with_missing
heroes_with_missing.dropna()
heroes_with_missing.fillna(0)
heroes_with_missing.fillna({'strength': 10, 'health': 20})
heroes_with_missing.fillna(heroes_with_missing.min())
heroes_with_missing.fillna(heroes_with_missing.median())
Explanation: Advanced topics + Advanced exercises
Filling missing data
End of explanation
pd.DataFrame({'x': [1, 2], 'y': [10, 20]}).plot(x='x', y='y', kind='scatter')
from sklearn.linear_model import LinearRegression
X=[ [1], [2] ]
y=[ 10, 20 ]
clf = LinearRegression()
clf.fit(X, y)
clf.predict([ [0.5], [2], [4] ])
X = np.array([ [1], [2] ])
y = np.array([ 10, 20 ])
X
clf = LinearRegression()
clf.fit(X, y)
clf.predict( np.array([ [0.5], [2], [4] ]) )
train_df = pd.DataFrame([
(1, 10),
(2, 20),
], columns=['x', 'y'])
train_df
clf = LinearRegression()
clf.fit(train_df[['x']], train_df['y'])
clf.predict([[0.5]])
test_df = pd.DataFrame({'x': [0.5, 1.5, 4]})
test_df
clf.predict(test_df[['x']])
test_df['y'] = clf.predict(test_df[['x']])
test_df
train_df['color'] = 'blue'
test_df['color'] = 'red'
all_df = train_df.append(test_df)
all_df.plot(x='x', y='y', kind='scatter', figsize=(10, 8), color=all_df['color'])
Explanation: Scikit-learn
End of explanation
# Solution here
from sklearn.linear_model import LinearRegression
FEATURES = ['num_voted_users', 'imdb_score']
TARGET = 'gross'
movies_with_data = movies[FEATURES + [TARGET]].dropna()
X = movies_with_data[FEATURES].values
y = movies_with_data[TARGET].values
clf = LinearRegression()
clf.fit(X, y)
clf.predict([
[800000, 8.0],
[400000, 8.0],
[400000, 4.0],
[ 40000, 8.0],
])
Explanation: More models to try: http://scikit-learn.org/stable/supervised_learning.html#supervised-learning
EXERCISE
Integration with scikit-learn: Create a model that tries to predict the gross of a movie. Use any features of the movies dataset.
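To gauge how well such a model generalizes, one option (a sketch, not part of the workshop solution; it assumes a scikit-learn version that ships sklearn.model_selection) is a simple train/test split:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LinearRegression()
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # R^2 on held-out data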
End of explanation
# Solution here
def discover_similar_plot(target_keywords, threshold=0.5):
movies_with_plot = movies.dropna(
subset=['plot_keywords']
).copy()
movies_with_plot['plot_keywords_set'] = movies_with_plot[
'plot_keywords'
].str.split('|').map(set)
movies_with_plot['match_count'] = movies_with_plot[
'plot_keywords_set'
].map(
lambda keywords: len(keywords.intersection(target_keywords))
)
return movies_with_plot[
(movies_with_plot['match_count'] >= threshold*len(target_keywords))
]
discover_similar_plot(['magic', 'harry', 'wizard'])['movie_title']
Explanation: Create a method discovering movies with plot keywords similar to the given list of keywords (i.e. ['magic', 'harry', 'wizard'])
End of explanation
# Solution in flask_exercise.py
Explanation: Integration with Flask
In the file flask_exercise.py you'll find the scaffolding for a Flask app. Create endpoints returning (a minimal sketch follows the list):
- all movie titles available in the movies dataset
- 10 worst rated movies ever
- 10 best rated (imdb_score) movies in a given year
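A minimal sketch of how two such endpoints might look (the route names and response shape are assumptions; the real scaffolding lives in flask_exercise.py):
from flask import Flask, jsonify
import pandas as pd

app = Flask(__name__)
movies = pd.read_csv('data/movies.csv')

@app.route('/titles')
def titles():
    # all movie titles available in the movies dataset
    return jsonify({'titles': movies['movie_title'].tolist()})

@app.route('/worst')
def worst():
    # 10 worst rated movies ever
    worst10 = movies.sort_values('imdb_score').head(10)
    return jsonify({'movies': worst10[['movie_title', 'imdb_score']].to_dict('records')})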
End of explanation |
13,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Budyko Transport for Energy Balance Models
In this document an Energy Balance Model (EBM) is set up with the energy transport parametrized through a Budyko-type parametrization term (instead of the default diffusion term), which characterizes the local energy flux through the difference between the local temperature and the global mean temperature.
$$H(\varphi) = - b [T(\varphi) - \bar{T}]$$
where $T(\varphi)$ is the surface temperature at latitude $\varphi$, $\bar{T}$ is the global mean temperature, and $H(\varphi)$ is the energy transport term in the energy budget written as
Step1: Model Creation
An EBM model instance is created through
Step2: The model is set up by default with a meridional diffusion term.
Step3: Create new subprocess
The creation of a subprocess needs some information from the model, in particular the model state on which the subprocess should be defined.
Step4: Note that the model's whole state dictionary is given as input to the subprocess. If only the temperature field ebm_budyko.state['Ts'] were given, a new state dictionary would be created which holds the surface temperature under the key 'default'. That raises an error because the Budyko transport process refers to the temperature by the key 'Ts'.
Now the new transport subprocess has to be merged into the model. The diffusion subprocess has to be removed.
Step5: Model integration & Plotting
To visualize the model state at the beginning of the integration, we first integrate the model for only one timestep
Step6: The following code plots the current surface temperature, albedo and energy budget
Step7: The two right-hand plots show that the model is not in equilibrium. The net radiation reveals that the model currently gains heat and therefore warms up at the poles and loses heat at the equator. From the Energy plot we can see that the latitudinal energy balance is not met.
Now we integrate the model until there are no more changes in the surface temperature and the model has reached equilibrium
Step8: Now we can see that the latitudinal energy balance is satisfied. Each latitude gains as much heat (net radiation) as is transported out of it (Budyko transport). There is a net radiation surplus in the equator region, so more shortwave radiation is absorbed there than is emitted through longwave radiation. At the poles there is a net radiation deficit. That imbalance is compensated by the Budyko energy transport term.
Global mean temperature
We use climlab to compute the global mean temperature and print the ice edge latitude | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
from climlab import constants as const
Explanation: Budyko Transport for Energy Balance Models
In this document an Energy Balance Model (EBM) is set up with the energy transport parametrized through a Budyko-type parametrization term (instead of the default diffusion term), which characterizes the local energy flux through the difference between the local temperature and the global mean temperature.
$$H(\varphi) = - b [T(\varphi) - \bar{T}]$$
where $T(\varphi)$ is the surface temperature at latitude $\varphi$, $\bar{T}$ is the global mean temperature, and $H(\varphi)$ is the energy transport term in the energy budget written as:
$$C(\varphi) \frac{dT(\varphi)}{dt} = R\downarrow (\varphi) - R\uparrow (\varphi) + H(\varphi)$$
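As a quick numerical illustration of this parametrization (a standalone NumPy sketch, independent of climlab; the temperatures are made up and the global mean ignores area weighting):
import numpy as np

b = 3.81                                      # transport coefficient (W m-2 degC-1)
T = np.array([-10.0, 5.0, 20.0, 5.0, -10.0])  # hypothetical zonal-mean temperatures (degC)
T_bar = T.mean()                              # crude global mean
H = -b * (T - T_bar)                          # Budyko transport: heats cold latitudes, cools warm ones
print(H)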
End of explanation
# model creation
ebm_budyko = climlab.EBM()
Explanation: Model Creation
An EBM model instance is created through
End of explanation
# print model states and subprocesses
print(ebm_budyko)
Explanation: The model is set up by default with a meridional diffusion term.
End of explanation
# create Budyko subprocess
budyko_transp = climlab.dynamics.BudykoTransport(b=3.81,
state=ebm_budyko.state,
**ebm_budyko.param)
Explanation: Create new subprocess
The creation of a subprocess needs some information from the model, in particular the model state on which the subprocess should be defined.
End of explanation
# add the new transport subprocess
ebm_budyko.add_subprocess('budyko_transport',budyko_transp)
# remove the old diffusion subprocess
ebm_budyko.remove_subprocess('diffusion')
print(ebm_budyko)
Explanation: Note that the model's whole state dictionary is given as input to the subprocess. If only the temperature field ebm_budyko.state['Ts'] were given, a new state dictionary would be created which holds the surface temperature under the key 'default'. That raises an error because the Budyko transport process refers to the temperature by the key 'Ts'.
Now the new transport subprocess has to be merged into the model. The diffusion subprocess has to be removed.
End of explanation
# integrate model for a single timestep
ebm_budyko.step_forward()
Explanation: Model integration & Plotting
To visualize the model state at the beginning of the integration, we first integrate the model for only one timestep:
End of explanation
# creating plot figure
fig = plt.figure(figsize=(15,10))
# Temperature plot
ax1 = fig.add_subplot(221)
ax1.plot(ebm_budyko.lat,ebm_budyko.Ts)
ax1.set_xticks([-90,-60,-30,0,30,60,90])
ax1.set_xlim([-90,90])
ax1.set_title('Surface Temperature', fontsize=14)
ax1.set_ylabel('(degC)', fontsize=12)
ax1.grid()
# Albedo plot
ax2 = fig.add_subplot(223, sharex = ax1)
ax2.plot(ebm_budyko.lat,ebm_budyko.albedo)
ax2.set_title('Albedo', fontsize=14)
ax2.set_xlabel('latitude', fontsize=10)
ax2.set_ylim([0,1])
ax2.grid()
# Net Radiation plot
ax3 = fig.add_subplot(222, sharex = ax1)
ax3.plot(ebm_budyko.lat, ebm_budyko.OLR, label='OLR',
color='cyan')
ax3.plot(ebm_budyko.lat, ebm_budyko.ASR, label='ASR',
color='magenta')
ax3.plot(ebm_budyko.lat, ebm_budyko.ASR-ebm_budyko.OLR,
label='net radiation',
color='red')
ax3.set_title('Net Radiation', fontsize=14)
ax3.set_ylabel('(W/m$^2$)', fontsize=12)
ax3.legend(loc='best')
ax3.grid()
# Energy Balance plot
net_rad = ebm_budyko.net_radiation
transport = ebm_budyko.subprocess['budyko_transport'].heating_rate['Ts']
ax4 = fig.add_subplot(224, sharex = ax1)
ax4.plot(ebm_budyko.lat, net_rad, label='net radiation',
color='red')
ax4.plot(ebm_budyko.lat, transport, label='heat transport',
color='blue')
ax4.plot(ebm_budyko.lat, net_rad+transport, label='balance',
color='black')
ax4.set_title('Energy', fontsize=14)
ax4.set_xlabel('latitude', fontsize=10)
ax4.set_ylabel('(W/m$^2$)', fontsize=12)
ax4.legend(loc='best')
ax4.grid()
plt.show()
Explanation: The following code plots the current surface temperature, albedo and energy budget:
End of explanation
# integrate model until solution converges
ebm_budyko.integrate_converge()
# creating plot figure
fig = plt.figure(figsize=(15,10))
# Temperature plot
ax1 = fig.add_subplot(221)
ax1.plot(ebm_budyko.lat,ebm_budyko.Ts)
ax1.set_xticks([-90,-60,-30,0,30,60,90])
ax1.set_xlim([-90,90])
ax1.set_title('Surface Temperature', fontsize=14)
ax1.set_ylabel('(degC)', fontsize=12)
ax1.grid()
# Albedo plot
ax2 = fig.add_subplot(223, sharex = ax1)
ax2.plot(ebm_budyko.lat,ebm_budyko.albedo)
ax2.set_title('Albedo', fontsize=14)
ax2.set_xlabel('latitude', fontsize=10)
ax2.set_ylim([0,1])
ax2.grid()
# Net Radiation plot
ax3 = fig.add_subplot(222, sharex = ax1)
ax3.plot(ebm_budyko.lat, ebm_budyko.OLR, label='OLR',
color='cyan')
ax3.plot(ebm_budyko.lat, ebm_budyko.ASR, label='ASR',
color='magenta')
ax3.plot(ebm_budyko.lat, ebm_budyko.ASR-ebm_budyko.OLR,
label='net radiation',
color='red')
ax3.set_title('Net Radiation', fontsize=14)
ax3.set_ylabel('(W/m$^2$)', fontsize=12)
ax3.legend(loc='best')
ax3.grid()
# Energy Balance plot
net_rad = ebm_budyko.net_radiation
transport = ebm_budyko.subprocess['budyko_transport'].heating_rate['Ts']
ax4 = fig.add_subplot(224, sharex = ax1)
ax4.plot(ebm_budyko.lat, net_rad, label='net radiation',
color='red')
ax4.plot(ebm_budyko.lat, transport, label='heat transport',
color='blue')
ax4.plot(ebm_budyko.lat, net_rad+transport, label='balance',
color='black')
ax4.set_title('Energy', fontsize=14)
ax4.set_xlabel('latitude', fontsize=10)
ax4.set_ylabel('(W/m$^2$)', fontsize=12)
ax4.legend(loc='best')
ax4.grid()
plt.show()
Explanation: The two right-hand plots show that the model is not in equilibrium. The net radiation reveals that the model currently gains heat and therefore warms up at the poles and loses heat at the equator. From the Energy plot we can see that the latitudinal energy balance is not met.
Now we integrate the model until there are no more changes in the surface temperature and the model has reached equilibrium:
End of explanation
print('The global mean temperature is %.2f deg C.' %climlab.global_mean(ebm_budyko.Ts))
print('The modeled ice edge is at %.2f deg latitude.' %np.max(ebm_budyko.icelat))
Explanation: Now we can see that the latitudinal energy balance is satisfied. Each latitude gains as much heat (net radiation) as is transported out of it (Budyko transport). There is a net radiation surplus in the equator region, so more shortwave radiation is absorbed there than is emitted through longwave radiation. At the poles there is a net radiation deficit. That imbalance is compensated by the Budyko energy transport term.
Global mean temperature
We use climlab to compute the global mean temperature and print the ice edge latitude:
End of explanation |
13,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Analyzing public cloud and hybrid networks
Public cloud and hybrid networks can be hard to debug and secure. Many of the standard tools (e.g., traceroute) do not work in the cloud setting, even though the types of paths that can emerge are highly complicated, depending on whether the endpoints are in the same region, different regions, across physical and virtual infrastructure, or whether public or private IPs of cloud instances are being used.
At the same time, the fast pace of evolution of these networks, where new subnets and instances can be spun up rapidly by different groups of people, creates a significant security risk. Network engineers need tools that can provide comprehensive guarantees that services and applications are available and secure as intended at all possible times.
In this notebook, we show how Batfish can predict and help debug network paths for cloud and hybrid networks and how it can guarantee that the network's availability and security posture is exactly as desired.
Step3: Initializing the Network and Snapshot
SNAPSHOT_PATH below can be updated to point to a custom snapshot directory. See instructions for how to package data for analysis.
Step4: The network snapshot that we just initialized is illustrated below. It has a datacenter network with the standard leaf-spine design on the left. Though not strictly necessary, we have included a host srv-101 in this network to enable end-to-end analysis. The exit gateway of the datacenter connects to an Internet service provider (ASN 65200) that we call isp_dc.
The AWS network is shown on the right. It is spread across two regions, us-east-2 and us-west-2. Each region has two VPCs, one of which is meant to host Internet-facing services and the other is meant to host only private services. Subnets in the public-facing VPCs use an Internet gateway to send and receive traffic outside of AWS. The two VPCs in a region peer via a transit gateway. Each VPC has two subnets, and we have some instances running as well.
The physical network connects to the AWS network using IPSec tunnels, shown in pink, between exitgw and the two transit gateways. BGP sessions run atop these tunnels to make endpoints aware of prefixes on the other side.
You can view configuration files that we used here. The AWS portion of the configuration is in the aws_configs subfolder. It has JSON files obtained via AWS APIs. An example script that packages AWS data into a Batfish snapshot is here.
Analyzing network paths
Batfish can help analyze cloud and hybrid networks by showing how exactly traffic flows (or not) in the network, which can help debug and fix configuration errors. Batfish can also help ensure that the network is configured exactly as desired, with respect to reachability and security policies. We illustrate these types of analysis below.
First, let's define a couple of maps to help with the analysis.
Step5: Paths across VPCs within an AWS region
To see how traffic flows between two instances in the same region but across different VPCs, say, from hosts["east2_private"] to hosts["east2_public"], we can run a traceroute query across them as follows.
In the query below, we use the name of the instance as the destination for the traceroute. This makes Batfish pick the instance's private (i.e., non-Elastic) IP (10.20.1.207). It does not pick the public IP because those IPs do not reside on instances but are used by the Internet gateway to NAT instances' traffic in and out of AWS (see documentation). If an instance has multiple private IPs, Batfish will pick one at random. To make Batfish use a specific IP, supply that IP as the argument to the dstIps parameter.
Step6: The trace above shows how traffic goes from hosts["east2_private"] to hosts["east2_public"] -- via the source subnet and VPC, then to the transit gateway, and finally to the destination VPC and subnet. Along the way, it also shows where the flow encounters security groups (at both instances) and network ACLs (at subnets). In this instance, all security groups and network ACLs permit this particular flow.
This type of insight into traffic paths, which helps understand and debug network configuration, is difficult to obtain otherwise. Traceroutes on the live AWS network do not yield any information if the flow does not make it through, and do not show why or where a packet is dropped.
Paths across AWS regions
The traceroute query below shows paths across instances in two different regions.
Step7: We see that such traffic does not reach the destination but instead is dropped by the AWS backbone (ASN 16509). This happens because, in our network, there is no (transit gateway or VPC) peering between VPCs in different regions. So, the source subnet is unaware of the address space of the destination subnet, which makes it use the default route that points to the Internet gateway (igw-02fd68f94367a67c7). The Internet gateway forwards the packet to aws-backbone, after NAT'ing its source IP. The packet is eventually dropped as it is using a private address as destination. Recall that using the instance name as destination amounts to using its private IP.
The behavior is different if we use the public IP instead, as shown below.
Step8: This traceroute starts out like the previous one, up until the AWS backbone (isp_16509) -- from source subnet to the Internet gateway which forwards it to the backbone, after source NAT'ing the packet. The backbone carries it to the internet gateway in the destination region (igw-0a8309f3192e7cea3), and this gateway NATs the packet's destination from the public IP to the instance's private IP.
Connectivity between DC and AWS
A common mode to connect to AWS is using VPNs and BGP, that is, establish IPSec tunnels between exit gateways on the physical side and AWS gateways and run BGP on top of these tunnels to exchange prefixes. Incompatibility in either IPSec or BGP settings on the two sides means that connectivity between the DC and AWS will not work.
Batfish can determine if the two sides are compatibly configured with respect to IPSec and BGP settings and if those sessions will come up.
The query below lists the status of all IPSec sessions between the exitgw and AWS transit gateways (specified using the regular expression ^tgw- that matches those node names). This filtering lets us ignore any other IPSec sessions that may exist in our network and focus on DC-AWS connectivity.
Step9: In the output above, we see all expected tunnels. Each transit gateway has two established sessions to exitgw. The default AWS behavior is to have two IPSec tunnels between gateways and physical nodes.
Now that we know IPSec tunnels are working, we can check BGP sessions. The query below lists the status of all BGP sessions where one end is an AWS transit gateway.
Step10: The output above shows that all BGP sessions are established as expected.
Paths from the DC to AWS
Finally, let's look at paths from the datacenter to AWS. The query below does that using the private IP of the public instance in the us-east-2 region.
Step11: We see that this traffic travels on the IPSec links between the datacenter's exitgw and the transit gateway in the destination region (tgw-06b348adabd13452d), and then makes it to the destination instance after making it successfully past the network ACL on the subnet node and the security group on the instance.
A different path emerges if we use the public IP of the same instance, as shown below.
Step12: We now see that the traffic traverses the Internet via isp_65200 and the Internet gateway (igw-02fd68f94367a67c7), which NATs the destination address of the packet from the public to the private IP.
Evaluating the network's availability and security
In addition to helping you understand and debug network paths, Batfish can also help ensure that the network is correctly configured with respect to its availability and security policies.
As examples, the queries below evaluate which instances are or are not reachable from the Internet.
Step13: We see that Batfish correctly computes that the two instances in the public subnets are accessible from the Internet, and the other two are not.
We can compare the answers produced by Batfish to what is expected based on network policy. This comparison can ensure that all instances that are expected to host public-facing services are indeed reachable from the Internet, and all instances that are expected to host private services are indeed not accessible from the Internet.
We can similarly compute which instances are reachable from hosts in the datacenter, using a query like the following.
Step14: We see that all four instances are accessible from the datacenter host.
Batfish allows a finer-grained evaluation of security policy as well. In our network, our intent is that the public instances should only allow SSH traffic. Let us see if this invariant actually holds.
Step15: We see that, against our policy, the public-facing instance allows non-SSH traffic. To see examples of such traffic, we can run the following query.
Step16: We thus see that our misconfigured public instance allows TCP traffic to port 3306 (MySQL).
In this and earlier reachability queries, we are not specifying anything about the flow to Batfish. It automatically figures out that the flow from the Internet that can reach hosts["east2_public"] must have 13.59.144.125 as its destination address, which after NAT'ing becomes the private IP of the instance. Such exhaustive analysis over all possible header spaces is unique to Batfish, which makes it an ideal tool for comprehensive availability and security analysis.
Batfish can also diagnose why certain traffic makes it past security groups and network ACLs. For example, we can run the testFilters question as below to reveal why the flow above made it past the security group on hosts["east2_public"]. | Python Code:
# Import packages
%run startup.py
bf = Session(host="localhost")
def show_first_trace(trace_answer_frame):
    """Prints the first trace in the answer frame.
    In the presence of multipath routing, Batfish outputs all traces
    from the source to destination. This function picks the first one.
    """
if len(trace_answer_frame) == 0:
print("No flows found")
else:
show("Flow: {}".format(trace_answer_frame.iloc[0]['Flow']))
show(trace_answer_frame.iloc[0]['Traces'][0])
def is_reachable(start_location, end_location, headers=None):
    """Checks if the start_location can reach the end_location using specified packet headers.
    All possible headers are considered if headers is None.
    """
ans = bf.q.reachability(pathConstraints=PathConstraints(startLocation=start_location,
endLocation=end_location),
headers=headers).answer()
return len(ans.frame()) > 0
Explanation: Analyzing public cloud and hybrid networks
Public cloud and hybrid networks can be hard to debug and secure. Many of the standard tools (e.g., traceroute) do not work in the cloud setting, even though the types of paths that can emerge are highly complicated, depending on whether the endpoints are in the same region, different regions, across physical and virtual infrastructure, or whether public or private IPs of cloud instances are being used.
At the same time, the fast pace of evolution of these networks, where new subnets and instances can be spun up rapidly by different groups of people, creates a significant security risk. Network engineers need tools that can provide comprehensive guarantees that services and applications are available and secure as intended at all possible times.
In this notebook, we show how Batfish can predict and help debug network paths for cloud and hybrid networks and how it can guarantee that the network's availability and security posture is exactly as desired.
End of explanation
# Initialize a network and snapshot
NETWORK_NAME = "hybrid-cloud"
SNAPSHOT_NAME = "snapshot"
SNAPSHOT_PATH = "networks/hybrid-cloud"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True)
Explanation: Initializing the Network and Snapshot
SNAPSHOT_PATH below can be updated to point to a custom snapshot directory. See instructions for how to package data for analysis.
End of explanation
#Instances in AWS in each region and VPC type (public, private)
hosts = {}
hosts["east2_private"] = "i-04cd3db5124a05ee6"
hosts["east2_public"] = "i-01602d9efaed4409a"
hosts["west2_private"] = "i-0a5d64b8b58c6dd09"
hosts["west2_public"] = "i-02cae6eaa9edeed70"
#Public IPs of instances in AWS
public_ips = {}
public_ips["east2_public"] = "13.59.144.125" # of i-01602d9efaed4409a
public_ips["west2_public"] = "54.191.42.182" # of i-02cae6eaa9edeed70
Explanation: The network snapshot that we just initialized is illustrated below. It has a datacenter network with the standard leaf-spine design on the left. Though not strictly necessary, we have included a host srv-101 in this network to enable end-to-end analysis. The exit gateway of the datacenter connects to an Internet service provider (ASN 65200) that we call isp_dc.
The AWS network is shown on the right. It is spread across two regions, us-east-2 and us-west-2. Each region has two VPCs, one of which is meant to host Internet-facing services and the other is meant to host only private services. Subnets in the public-facing VPCs use an Internet gateway to send and receive traffic outside of AWS. The two VPCs in a region peer via a transit gateway. Each VPC has two subnets, and we have some instances running as well.
The physical network connects to the AWS network using IPSec tunnels, shown in pink, between exitgw and the two transit gateways. BGP sessions run atop these tunnels to make endpoints aware of prefixes on the other side.
You can view configuration files that we used here. The AWS portion of the configuration is in the aws_configs subfolder. It has JSON files obtained via AWS APIs. An example script that packages AWS data into a Batfish snapshot is here.
Analyzing network paths
Batfish can help analyze cloud and hybrid networks by showing how exactly traffic flows (or not) in the network, which can help debug and fix configuration errors. Batfish can also help ensure that the network is configured exactly as desired, with respect to reachability and security policies. We illustrate these types of analysis below.
First, let's define a couple of maps to help with the analysis.
End of explanation
# traceroute between instances in the same region, using SSH
ans = bf.q.traceroute(startLocation=hosts["east2_private"],
headers=HeaderConstraints(dstIps=hosts["east2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
Explanation: Paths across VPCs within an AWS region
To see how traffic flows between two instances in the same region but across different VPCs, say, from hosts["east2_private"] to hosts["east2_public"], we can run a traceroute query across them as follows.
In the query below, we use the name of the instance as the destination for the traceroute. This makes Batfish pick the instance's private (i.e., non-Elastic) IP (10.20.1.207). It does not pick the public IP because those IPs do not reside on instances but are used by the Internet gateway to NAT the instance's traffic in and out of AWS (see documentation). If an instance has multiple private IPs, Batfish will pick one at random. To make Batfish use a specific IP, supply that IP as the argument to the dstIps parameter (a short example follows below).
End of explanation
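For instance, to make Batfish use a specific destination IP rather than the instance name, a query along the following lines could be used (10.20.1.207 is just the private IP mentioned above):
# same traceroute as above, but with an explicit destination IP instead of the instance name
ans = bf.q.traceroute(startLocation=hosts["east2_private"],
                      headers=HeaderConstraints(dstIps="10.20.1.207",
                                                applications="ssh")).answer()
show_first_trace(ans.frame())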
# traceroute between instances across region using the destination's private IP
ans = bf.q.traceroute(startLocation=hosts["east2_public"],
headers=HeaderConstraints(dstIps=hosts["west2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
Explanation: The trace above shows how traffic goes from hosts["east2_private"] to hosts["east2_public"] -- via the source subnet and VPC, then to the transit gateway, and finally to the destination VPC and subnet. Along the way, it also shows where the flow encounters security groups (at both instances) and network ACLs (at subnets). In this instance, all security groups and network ACLs permit this particular flow.
This type of insight into traffic paths, which helps understand and debug network configuration, is difficult to obtain otherwise. Traceroutes on the live AWS network do not yield any information if the flow does not make it through, and do not show why or where a packet is dropped.
Paths across AWS regions
The traceroute query below shows paths across instances in two different regions.
End of explanation
# traceroute between instances across regions using the destination's public IP
ans = bf.q.traceroute(startLocation=hosts["east2_public"],
headers=HeaderConstraints(dstIps=public_ips["west2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
Explanation: We see that such traffic does not reach the destination but instead is dropped by the AWS backbone (ASN 16509). This happens because, in our network, there is no (transit gateway or VPC) peering between VPCs in different regions. So, the source subnet is unaware of the address space of the destination subnet, which makes it use the default route that points to the Internet gateway (igw-02fd68f94367a67c7). The Internet gateway forwards the packet to aws-backbone, after NAT'ing its source IP. The packet is eventually dropped as it is using a private address as destination. Recall that using the instance name as destination amounts to using its private IP.
The behavior is different if we use the public IP instead, as shown below.
End of explanation
# show the status of all IPSec tunnels between exitgw and AWS transit gateways
ans = bf.q.ipsecSessionStatus(nodes="exitgw", remoteNodes="/^tgw-/").answer()
show(ans.frame())
Explanation: This traceroute starts out like the previous one, up until the AWS backbone (isp_16509) -- from source subnet to the Internet gateway which forwards it to the backbone, after source NAT'ing the packet. The backbone carries it to the internet gateway in the destination region (igw-0a8309f3192e7cea3), and this gateway NATs the packet's destination from the public IP to the instance's private IP.
Connectivity between DC and AWS
A common mode to connect to AWS is using VPNs and BGP, that is, establish IPSec tunnels between exit gateways on the physical side and AWS gateways and run BGP on top of these tunnels to exchange prefixes. Incompatibility in either IPSec or BGP settings on the two sides means that connectivity between the DC and AWS will not work.
Batfish can determine if the two sides are compatibly configured with respect to IPSec and BGP settings and if those sessions will come up.
The query below lists the status of all IPSec sessions between the exitgw and AWS transit gateways (specified using the regular expression ^tgw- that matches those node names). This filtering lets us ignore any other IPSec sessions that may exist in our network and focus on DC-AWS connectivity.
End of explanation
# show the status of all BGP sessions between exitgw and AWS transit gateways
ans = bf.q.bgpSessionStatus(nodes="exitgw", remoteNodes="/^tgw-/").answer()
show(ans.frame())
Explanation: In the output above, we see all expected tunnels. Each transit gateway has two established sessions to exitgw. The default AWS behavior is to have two IPSec tunnels between gateways and physical nodes.
Now that we know IPSec tunnels are working, we can check BGP sessions. The query below lists the status of all BGP sessions where one end is an AWS transit gateway.
End of explanation
# traceroute from the DC host to an instance using its private IP
ans = bf.q.traceroute(startLocation="srv-101",
headers=HeaderConstraints(dstIps=hosts["east2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
Explanation: The output above shows that all BGP sessions are established as expected.
Paths from the DC to AWS
Finally, let's look at paths from the datacenter to AWS. The query below does that using the private IP of the public instance in the us-east-2 region.
End of explanation
# traceroute from the DC host to an instance using its public IP
ans = bf.q.traceroute(startLocation="srv-101",
headers=HeaderConstraints(dstIps=public_ips["east2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
Explanation: We see that this traffic travels on the IPSec links between the datacenter's exitgw and the transit gateway in the destination region (tgw-06b348adabd13452d), and then makes it to the destination instance after making it successfully past the network ACL on the subnet node and the security group on the instance.
A different path emerges if we use the public IP of the same instance, as shown below.
End of explanation
# compute which instances are open to the Internet
reachable_from_internet = [key for (key, value) in hosts.items() if is_reachable("internet", value)]
print("\nInstances reachable from the Internet: {}".format(sorted(reachable_from_internet)))
# compute which instances are NOT open to the Internet
unreachable_from_internet = [key for (key, value) in hosts.items() if not is_reachable("internet", value)]
print("\nInstances NOT reachable from the Internet: {}".format(sorted(unreachable_from_internet)))
Explanation: We now see that the traffic traverses the Internet via isp_65200 and the Internet gateway (igw-02fd68f94367a67c7), which NATs the destination address of the packet from the public to the private IP.
Evaluating the network's availability and security
In addition to helping you understand and debug network paths, Batfish can also help ensure that the network is correctly configured with respect to its availability and security policies.
As examples, the queries below evaluate which instances are or are not reachable from the Internet.
End of explanation
# compute which instances are reachable from data center
reachable_from_dc = [key for (key,value) in hosts.items() if is_reachable("srv-101", value)]
print("\nInstances reachable from the DC: {}".format(sorted(reachable_from_dc)))
Explanation: We see that Batfish correctly computes that the two instances in the public subnets are accessible from the Internet, and the other two are not.
We can compare the answers produced by Batfish to what is expected based on network policy. This comparison can ensure that all instances that are expected to host public-facing services are indeed reachable from the Internet, and all instances that are expected to host private services are indeed not accessible from the Internet (a small sketch of such a check appears below).
We can similarly compute which instances are reachable from hosts in the datacenter, using a query like the following.
End of explanation
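As a rough sketch of the policy comparison mentioned above (the expected_public set below is a hypothetical example of a policy, not data from the snapshot):
# hypothetical policy: instances that are *supposed* to be Internet-facing
expected_public = {"east2_public", "west2_public"}
computed_public = set(reachable_from_internet)
if computed_public != expected_public:
    print("Policy violation. Unexpectedly exposed: {}. Unexpectedly private: {}.".format(
        sorted(computed_public - expected_public), sorted(expected_public - computed_public)))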
tcp_non_ssh = HeaderConstraints(ipProtocols="tcp", dstPorts="!22")
reachable_from_internet_non_ssh = [key for (key, value) in hosts.items()
if is_reachable("internet", value, tcp_non_ssh)]
print("\nInstances reachable from the Internet with non-SSH traffic: {}".format(
sorted(reachable_from_internet_non_ssh)))
Explanation: We see that all four instances are accessible from the datacenter host.
Batfish allows a finer-grained evaluation of security policy as well. In our network, our intent is that the public instances should only allow SSH traffic. Let us see if this invariant actually holds.
End of explanation
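To turn this SSH-only intent into a hard check, one could simply assert on the computed result (illustrative sketch):
# fail loudly if any instance accepts non-SSH TCP traffic from the Internet
assert len(reachable_from_internet_non_ssh) == 0, \
    "Non-SSH traffic reaches: {}".format(sorted(reachable_from_internet_non_ssh))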
ans = bf.q.reachability(pathConstraints=PathConstraints(startLocation="internet",
endLocation=hosts["east2_public"]),
headers=tcp_non_ssh).answer()
show_first_trace(ans.frame())
Explanation: We see that, against our policy, the public-facing instance allows non-SSH traffic. To see examples of such traffic, we can run the following query.
End of explanation
flow=ans.frame().iloc[0]['Flow'] # the rogue flow uncovered by Batfish above
ans = bf.q.testFilters(nodes=hosts["east2_public"],
filters="~INGRESS_ACL~eni-01997085076a9b98a",
headers=HeaderConstraints(srcIps=flow.srcIp,
dstIps="10.20.1.207", # destination IP after the NAT at Step 3 above
srcPorts=flow.srcPort,
dstPorts=flow.dstPort,
ipProtocols=flow.ipProtocol)).answer()
show(ans.frame())
Explanation: We thus see that our misconfigured public instance allows TCP traffic to port 3306 (MySQL).
In this and earlier reachability queries, we are not specifying anything about the flow to Batfish. It automatically figures out that the flow from the Internet that can reach hosts["east2_public"] must have 13.59.144.125 as its destination address, which after NAT'ing becomes the private IP of the instance. Such exhaustive analysis over all possible header spaces is unique to Batfish, which makes it an ideal tool for comprehensive availability and security analysis.
Batfish can also diagnose why certain traffic makes it past security groups and network ACLs. For example, we can run the testFilters question as below to reveal why the flow above made it past the security group on hosts["east2_public"].
End of explanation |
13,415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. | Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example (a tiny numeric illustration appears below).
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
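As a tiny numeric illustration of the quantity being filled into that matrix (one test example against one training example):
# Euclidean (L2) distance between the first test image and the first training image
d = np.sqrt(np.sum((X_test[0] - X_train[0]) ** 2))
print('Distance between test[0] and train[0]: %f' % d)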
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
End of explanation
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
Explanation: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:
End of explanation
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, 5)
y_train_folds = np.array_split(y_train, 5)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
k_to_accuracies[k] = []
for i in range(5):
tmp_X_train = np.concatenate(X_train_folds[-5:i] + X_train_folds[i + 1:])
tmp_y_train = np.concatenate(y_train_folds[-5:i] + y_train_folds[i + 1:])
tmp_X_test = X_train_folds[i]
tmp_y_test = y_train_folds[i]
classifier.train(tmp_X_train, tmp_y_train)
dists = classifier.compute_distances_no_loops(tmp_X_test)
y_test_pred = classifier.predict_labels(dists, k)
num_correct = np.sum(y_test_pred == tmp_y_test)
accuracy = float(num_correct) / tmp_y_test.shape[0]
k_to_accuracies[k].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 5
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation |
13,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pupil_new explanation (in detail)
This is a notebook on explaining deeplabcut workflow (Detailed version)
Let's import pupil first (and datajoint)
Step1: OK, now let's see what is under pupil module. Simplest way of understanding this module is calling dj.ERD
Step2: There are 3 particular tables we want to pay attention to
Step3: ConfigDeeplabcut is a table used to load configuration settings specific to DeepLabCut (DLC) model. Whenever we update our model for some reason (which is going to be Donnie most likely), one needs to ensure that the new config_path with appropriate shuffle and trainingsetindex is provided into this table.
For now, there is only one model (i.e. one model configuration), therefore only 1 entry.
Now let's look at TrackedLabelsDeeplabcut
TrackedLabelDeeplabcut
Step4: First things first. TrackedLabelsDeeplabcut takes ConfigDeeplabcut as a foreign key (as you can see from dj.ERD)
Under TrackedLabelsDeeplabcut, there are 3 part tables
Also, TrackedLabelsDeeplabcut is a complex table that performs the following
Step5: Basically, given a specific video, it creates a tracking directory, adds a symlink to the original video inside the tracking directory, and then creates two subdirectories, short and compressed_cropped. The reason we use such a hierarchy is that
1. we want to compress the video (not over time but only over space) such that we reduce the size of the video while DLC can still predict reliably
2. we do not want to predict on the entire video, but only around the pupil area, hence we need to crop
3. In order to crop, we need to know where the pupil is, hence make a 5 sec long (or short video). Then using DLC model, find appropriate cropping coordinates.
One can actually see a real example by looking at case 20892_10_10, one of the entries in the table
Step6: .pickle and .h5 files are generated by DLC model and are used to predict labels. We will talk about it very soon, but for now, notice that under tracking_dir, we have the behavior video, 20892_9_10_beh.avi. This is, however, only a symlink to the actual video. Hence, even if we accidentally delete it, no harm to the actual video itself
Step7: 2. Make a 5 sec long short clip starting from the middle of the original video via make_short_video
Step8: This function is quite straightforward. Using the symlink, we access the original video, then find the middle frame, which then is converted into actual time (in format of hr
Step9: 3. Using DLC model, predict/ find labels on short video via predict_labels
Step10: Using DLC model, we predict on short video that was made from step 2. Quite straightforward here.
4. From the labels on short video, obtain coordinates to be used to crop the original video via obtain_cropping_coords
Step11: To fully understand what is going on here, a bit of background on deeplabcut (DLC) is needed. When DLC predicts a label, it returns a likelihood of a label (value between 0 and 1.0). Here, I used 0.9 as a threshold to filter out whether the predicted label is accurate or not.
For example, we can take a quick look on how DLC predicted on short video clip
Step12: 0.90 is probably more than good enough given how confident DLC thinks about the locations of bodyparts.
But sometimes, like any deep learning model, DLC can predict somewhere completely wrong with high confidence. To filter those potential outliers, we only retain values within 1 std. from the mean. Then, we find the min and max values in x and y coordinates from the 5 second long video.
Btw, we only look at the eyelid_top, eyelid_bottom, eyelid_left, eyelid_right as they are, in theory, extremes of the bounding box to draw.
5. Add additional pixels on cropping coordinates via add_pixels
Step13: Now that we have coords to crop around, we add additional pixels (100 specifically) on top.
In my experience, 100 pixels were enough to ensure that even in drastic eyelid movements (i.e. eyes being super wide open), all the body parts are within the cropping coordinates.
6. Using the coordinates from step 5, crop and compress original video via make_compressed_cropped_video
Step14: Using cropping coordinates from step 5, we compress and crop the original video via ffmpeg and save it under compressed_cropped directory. This takes around 15-25 minutes.
In CompressedCroppedVideo part table, one can see the cropping coords (after adding added_pixels), how many pixels added, and video_path to compressed_cropped video
Step15: 7. Predict on compressed_cropped_video
Step16: Same as step 3, but this time using the compressed and cropped video. MAKE SURE YOU HAVE GPU AVAILABLE. Otherwise, this will take a significantly longer time. With GPU enabled, this takes around 20-40 minutes.
FittedContourDeeplabcut
Now that we have tracked labels, it is time to fit. Here, we fit both a circle and an ellipse.
Step17: Circle
Step18: For circle, we save center coordinates in a tuple format, radius in float, and visible_portion in float. visible_portion is defined as the following
Step19: Ellipse table is very similar to that of a circle table
Ellipse | Python Code:
import datajoint as dj
from pipeline import pupil
Explanation: pupil_new explanation (in detail)
This is a notebook on explaining deeplabcut workflow (Detailed version)
Let's import pupil first (and datajoint)
End of explanation
dj.ERD(pupil.schema)
Explanation: OK, now let's see what is under pupil module. Simplest way of understanding this module is calling dj.ERD
End of explanation
pupil.ConfigDeeplabcut()
pupil.ConfigDeeplabcut.heading
Explanation: There are 3 particular tables we want to pay attention to:
1. ConfigDeeplabcut (dj.Manual)
2. TrackedLabelsDeeplabcut (dj.Computed)
3. FittedContourDeeplabcut (dj.Computed)
let's look at ConfigDeeplabcut first
ConfigDeeplabcut
End of explanation
pupil.TrackedLabelsDeeplabcut()
pupil.TrackedLabelsDeeplabcut.heading
Explanation: ConfigDeeplabcut is a table used to load configuration settings specific to DeepLabCut (DLC) model. Whenever we update our model for some reason (which is going to be Donnie most likely), one needs to ensure that the new config_path with appropriate shuffle and trainingsetindex is provided into this table.
For now, there is only one model (i.e. one model configuration), therefore only 1 entry.
Now let's look at TrackedLabelsDeeplabcut
TrackedLabelDeeplabcut
End of explanation
pupil.TrackedLabelsDeeplabcut.create_tracking_directory?
Explanation: First things first. TrackedLabelsDeeplabcut takes ConfigDeeplabcut as a foreign key (as you can see from dj.ERD)
Under TrackedLabelsDeeplabcut, there are 3 part tables
Also, TrackedLabelsDeeplabcut is a complex table that performs the following:
Given a specific key (i.e. animal_id, session, scan_idx), it creates a needed directory structure by calling create_tracking_directory.
Make a 5 sec long short clip starting from the middle of the original video via make_short_video
Using DLC model, predict/ find labels on short video via predict_labels
From the labels on short video, obtain coordinates to be used to crop the original video via obtain_cropping_coords
Add additional pixels on cropping coordinates via add_pixels
Using the coordinates from step 5, crop and compress the original video via make_compressed_cropped_video
Predict on compressed_cropped_video
I know it is a lot to digest, so let's look at it one by one
1. Given a specific key (i.e. animal_id, session, scan_idx), it creates a needed directory structure by calling create_tracking_directory.
Let's call create_tracking_directory? and see what that does
End of explanation
# Uncomment this cell to see
import os
key = dict(animal_id = 20892, session=10, scan_idx=10)
tracking_dir = (pupil_new.TrackedLabelsDeeplabcut & key).fetch1('tracking_dir')
print(os.listdir(tracking_dir))
print(os.listdir(os.path.join(tracking_dir, 'short')))
print(os.listdir(os.path.join(tracking_dir, 'compressed_cropped')))
Explanation: Basically, given a specific video, it creates a tracking directory, adds a symlink to the original video inside the tracking directory, and then creates two subdirectories, short and compressed_cropped. The reason we use such a hierarchy is that
1. we want to compress the video (not over time but only over space) such that we reduce the size of the video while DLC can still predict reliably
2. we do not want to predict on the entire video, but only around the pupil area, hence we need to crop
3. In order to crop, we need to know where the pupil is, hence make a 5 sec long (or short video). Then using DLC model, find appropriate cropping coordinates.
One can actually see a real example by looking at case 20892_10_10, one of the entries in the table
End of explanation
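A rough sketch of the directory layout being set up (illustration only; the real logic lives in create_tracking_directory):
import os
def sketch_tracking_layout(original_video_path, tracking_dir):
    # illustration only, not the actual pipeline code
    os.makedirs(os.path.join(tracking_dir, 'short'), exist_ok=True)
    os.makedirs(os.path.join(tracking_dir, 'compressed_cropped'), exist_ok=True)
    symlink_path = os.path.join(tracking_dir, os.path.basename(original_video_path))
    if not os.path.exists(symlink_path):
        os.symlink(original_video_path, symlink_path)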
pupil.TrackedLabelsDeeplabcut.OriginalVideo()
Explanation: .pickle and .h5 files are generated by DLC model and are used to predict labels. We will talk about it very soon, but for now, notice that under tracking_dir, we have the behavior video, 20892_9_10_beh.avi. This is, however, only a symlink to the actual video. Hence, even if we accidentally delete it, no harm to the actual video itself :)
Also, some original video info is saved in the part table, OriginalVideo. I think both the primary and secondary keys are self-explanatory
End of explanation
pupil.TrackedLabelsDeeplabcut.make_short_video?
Explanation: 2. Make a 5 sec long short clip starting from the middle of the original video via make_short_video
End of explanation
pupil.TrackedLabelsDeeplabcut.ShortVideo()
Explanation: This function is quite straightforward. Using the symlink, we access the original video, then find the middle frame, which is then converted into actual time (in the format hr:min:sec). Then, using ffmpeg, we extract a 5 second long clip and save it under the short directory (a rough sketch of the frame-to-time conversion follows below).
For ShortVideo part table, it saves both the path to the short video (video_path) and starting_frame. starting_frame indicates the middle frame number of the original video.
End of explanation
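The frame-to-timestamp conversion described above might look roughly like this (the fps value is only an example):
fps = 30.0           # example frame rate, not necessarily the real one
middle_frame = 54000
seconds = middle_frame / fps
start_time = '{:02d}:{:02d}:{:02d}'.format(int(seconds // 3600),
                                           int((seconds % 3600) // 60),
                                           int(seconds % 60))
print(start_time)    # '00:30:00' for this example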
pupil.TrackedLabelsDeeplabcut.predict_labels?
Explanation: 3. Using DLC model, predict/ find labels on short video via predict_labels
End of explanation
pupil.TrackedLabelsDeeplabcut.obtain_cropping_coords?
Explanation: Using DLC model, we predict on short video that was made from step 2. Quite straightforward here.
4. From the labels on short video, obtain coordinates to be used to crop the original video via obtain_cropping_coords
End of explanation
import pandas as pd
df_short = pd.read_hdf(os.path.join(tracking_dir,'short', '20892_10_00010_beh_shortDeepCut_resnet50_pupil_trackFeb12shuffle1_600000.h5'))
df_short.head()
Explanation: To fully understand what is going on here, a bit of background on deeplabcut (DLC) is needed. When DLC predicts a label, it returns a likelihood of a label (value between 0 and 1.0). Here, I used 0.9 as a threshold to filter out whether the predicted label is accurate or not (a short illustration follows below).
For example, we can take a quick look on how DLC predicted on short video clip
End of explanation
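As a small illustration of the likelihood threshold (assuming the standard DLC column layout of scorer / bodypart / x, y, likelihood):
# illustration only: keep eyelid_top coordinates whose likelihood exceeds 0.9
scorer = df_short.columns.get_level_values(0)[0]
confident = df_short[scorer]['eyelid_top']['likelihood'] > 0.9
eyelid_top_x = df_short[scorer]['eyelid_top']['x'][confident]
eyelid_top_y = df_short[scorer]['eyelid_top']['y'][confident]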
pupil.TrackedLabelsDeeplabcut.add_pixels?
Explanation: 0.90 is probably more than good enough given how confident DLC thinks about the locations of bodyparts.
But sometimes, like any deep learning model, DLC can predict somewhere completely wrong with high confidence. To filter those potential outliers, we only retain values within 1 std. from the mean. Then, we find the min and max values in x and y coordinates from the 5 second long video (a small numpy sketch follows below).
Btw, we only look at the eyelid_top, eyelid_bottom, eyelid_left, eyelid_right as they are, in theory, extremes of the bounding box to draw.
5. Add additional pixels on cropping coordinates via add_pixels
End of explanation
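The outlier filtering and min/max extraction described above could be sketched like this (toy numbers, not the exact implementation):
import numpy as np
def within_one_std(values):
    # keep only values within 1 standard deviation of the mean
    values = np.asarray(values)
    m, s = values.mean(), values.std()
    return values[(values >= m - s) & (values <= m + s)]
xs = np.array([210.0, 212.5, 215.0, 480.0, 213.2])   # toy x coordinates with one outlier
kept = within_one_std(xs)
print(kept.min(), kept.max())                        # these extremes become the cropping bounds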
pupil.TrackedLabelsDeeplabcut.make_compressed_cropped_video?
Explanation: Now that we have coords to crop around, we add additional pixels (100 specifically) on top.
In my experience, 100 pixels were enough to ensure that even in drastic eyelid movements (i.e. eyes being super wide open), all the body parts are within the cropping coordinates.
6. Using the coordinates from step 5, crop and compress original video via make_compressed_cropped_video
End of explanation
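Adding the margin is then simple arithmetic on the bounding box (a sketch; the real method is add_pixels):
added_pixels = 100
x_min, x_max, y_min, y_max = 210, 380, 150, 300      # toy cropping coordinates
x_min, y_min = max(0, x_min - added_pixels), max(0, y_min - added_pixels)
x_max, y_max = x_max + added_pixels, y_max + added_pixels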
pupil.TrackedLabelsDeeplabcut.CompressedCroppedVideo()
Explanation: Using cropping coordinates from step 5, we compress and crop the original video via ffmpeg and save it under compressed_cropped directory. This takes around 15-25 minutes.
In CompressedCroppedVideo part table, one can see the cropping coords (after adding added_pixels), how many pixels added, and video_path to compressed_cropped video
End of explanation
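A generic ffmpeg crop-and-compress call looks roughly like the following (file names and flags are illustrative, not necessarily the exact ones used by the pipeline):
import subprocess
# crop to a 380x300 box at offset (110, 50) and re-encode with H.264
subprocess.run(['ffmpeg', '-i', 'original.avi',
                '-filter:v', 'crop=380:300:110:50',
                '-c:v', 'libx264', '-crf', '28',
                'compressed_cropped.avi'])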
pupil.TrackedLabelsDeeplabcut.predict_labels?
Explanation: 7. Predict on compressed_cropped_video
End of explanation
pupil.FittedContourDeeplabcut()
Explanation: Same as step 3, but this time using the compressed and cropped video. MAKE SURE YOU HAVE GPU AVAILABLE. Otherwise, this will take a significantly longer time. With GPU enabled, this takes around 20-40 minutes.
FittedContourDeeplabcut
Now that we have tracked labels, it is time to fit. Here, we fit both a circle and an ellipse.
End of explanation
print(key)
(pupil.FittedContourDeeplabcut & key).Circle.heading
(pupil.FittedContourDeeplabcut & key).Circle()
Explanation: Circle
End of explanation
from pipeline.utils import DLC_tools
DLC_tools.PupilFitting.detect_visible_pupil_area?
Explanation: For circle, we save center coordinates in a tuple format, radius in float, and visible_portion in float. visible_portion is defined as the following: Given a fitted circle or an ellipse, subtract the area that is occluded by eyelids, and return the portion of the visible pupil area w.r.t. the fitted area. In theory, the value ranges from 0 (pupil is completely invisible) to 1 (pupil is completely visible). However, there are cases where the visible portion cannot be calculated:
DLC failed to predict all eyelid labels, hence visible region cannot be obtained (evaluated to -1)
Because the number of predicted pupil labels are less than 3 for circle (and 6 for ellipse), fitting did not happen. Hence we do not know the area of the pupil as well as visible region (evaluated to -2)
Both case 1 and 2 happened (evaluated to -3)
At the beginning of the videos, we have black screens, hence both eyelid and pupil labels are not predicted, which evaluates to -3.
As visible_portion comment indicates, one can find the same information from DLC_tools.PupilFitting.detect_visible_pupil_area
End of explanation
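The sign convention described above can be summarized in a few lines (a sketch of the convention only, not the pipeline implementation):
def visible_portion_code(fit_ok, eyelids_ok, visible_area=None, fitted_area=None):
    # -1: eyelids missing, -2: fit missing, -3: both missing
    if not fit_ok and not eyelids_ok:
        return -3.0
    if not fit_ok:
        return -2.0
    if not eyelids_ok:
        return -1.0
    return visible_area / fitted_area   # fraction of the fitted pupil that is visible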
print(key)
(pupil.FittedContourDeeplabcut & key).Ellipse.heading
(pupil.FittedContourDeeplabcut & key).Ellipse()
Explanation: Ellipse table is very similar to that of a circle table
Ellipse
End of explanation |
13,417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task
Step1: Define path to data
Step2: A few basic libraries that we'll need for the initial exercises
Step3: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
Step4: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline
Step5: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object
Step6: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder
Step7: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
Step8: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
Step9: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
Step10: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four
Step11: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
Step12: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
Step13: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
Step14: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras
Step15: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
Step16: Here's a few examples of the categories we just imported
Step17: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition
Step18: ...and here's the fully-connected definition.
Step19: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model
Step20: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
Step21: We'll learn about what these different blocks do later in the course. For now, it's enough to know that
Step22: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
Step23: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
Step24: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data
Step25: From here we can use exactly the same steps as before to look at predictions from the model.
Step26: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label. | Python Code:
%matplotlib inline
#change image dim ordering?
# from keras import backend
# backend.set_image_dim_ordering('th')
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using jupyter notebook:
End of explanation
# path = "data/dogscats/"
path = "data/dogscats/sample/"
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
from importlib import reload
import utils ; reload(utils)
from utils import plots
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=16
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
vgg.model.summary()
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
print('1')
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
print('2')
vgg.finetune(batches)
print('3')
vgg.fit(batches, val_batches, batch_size, nb_epoch=1)
print('4')
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
vgg = Vgg16()
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
batches = vgg.get_batches(path+'train', batch_size=4)
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
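get_batches behaves much like Keras' own directory iterator; a rough equivalent call (a sketch, not the actual wrapper code) would be:
from keras.preprocessing import image
gen = image.ImageDataGenerator()
batches = gen.flow_from_directory(path+'train', target_size=(224, 224),
                                  class_mode='categorical', shuffle=True, batch_size=4)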
imgs,labels = next(batches)
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
plots(imgs, titles=labels)
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
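As a tiny self-contained sketch (not part of the original notebook), the same one hot encoding can be produced by hand with numpy:
import numpy as np
class_idx = np.array([0, 1, 1, 0])      # 0 = cat, 1 = dog
one_hot = np.eye(2)[class_idx]          # rows are [1, 0] for cat and [0, 1] for dog
print(one_hot)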
End of explanation
vgg.predict(imgs, True)
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
vgg.classes[:4]
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g. here are the first four:
End of explanation
batch_size=32
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
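As a rough, self-contained sketch of the idea (an assumption about the general recipe, not the actual Vgg16.finetune() source), finetuning in Keras amounts to freezing the pretrained layers and training a fresh output layer:
# Toy model standing in for the pretrained network (shapes are arbitrary, illustration only)
from keras.models import Sequential
from keras.layers.core import Dense
toy = Sequential([Dense(8, activation='relu', input_dim=4),
                  Dense(3, activation='softmax')])
toy.pop()                                   # drop the old output layer
for layer in toy.layers:
    layer.trainable = False                 # keep the pretrained weights fixed
toy.add(Dense(2, activation='softmax'))     # new 2-way (cat vs dog) output
toy.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])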
We create our batches just like before, making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
vgg.finetune(batches)
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
vgg.fit(batches, val_batches, nb_epoch=1,batch_size = batch_size)
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
import json  # used below when loading the imagenet class index file
import numpy as np  # used below for vgg_mean and np.argmax
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
classes[:5]
Explanation: Here's a few examples of the categories we just imported:
End of explanation
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
Explanation: ...and here's the fully-connected definition.
End of explanation
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
return x[:, ::-1] # reverse axis bgr->rgb
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
model = VGG_16()
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
batch_size = 4
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
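As a tiny illustration of that argmax step (example numbers, not real model output):
import numpy as np
probs = np.array([[0.1, 0.7, 0.2],
                  [0.6, 0.3, 0.1]])
print(np.argmax(probs, axis=1))   # [1 0] -> the most likely class index for each row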
End of explanation |
13,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, import the processing tools that contain classes and methods to read, plot and process standard unit particle distribution files.
Step1: The module consists of a class 'ParticleDistribution' that initializes to a dictionary containing the following entries given a filepath
Step2: Alternatively one can ask for a pandas dataframe where each column is one of the above properties of a macroparticle per row.
Step3: This allows for quick plotting using the inbuilt pandas methods
Step4: If further statistical analysis is required, the class 'Statistics' is provided. This contains methods to process standard properties of the electron bunch. This is called by giving a filepath to 'Statistics' The following operations can be performed
Step5: And finally there is the FEL_Approximations which calculate simple FEL properties per slice. This is a subclass of statistics and as such every method described above is callable.
This class conatins the 'undulator' function that calculates planar undulator parameters given a period and either a peak magnetic field or K value.
The data must be sliced and most statistics have to be run before the other calculations can take place.
These are 'pierce' which calculates the pierce parameter and 1D gain length for a given slice, 'gain length' which calculates the Ming Xie gain and returns three entries in the dict 'MX_gain', '1D_gain', 'pierce', which hold an array for these values per slice.
'FELFrame' returns a pandas dataframe with these and 'z_pos' for reference.
To make this easier, the class ProcessedData takes a filepath, number of slcies, undulator period, magnetic field or K and performs all the necessary steps automatically. As this is a subclass of FEL_Approximations all the values written above are accessible from here.
Step6: If it is important to plot the statistical data alongside the FEL data, that can be easily achieved by concatinating the two sets as shown below | Python Code:
import processing_tools as pt
Explanation: First, import the processing tools that contain classes and methods to read, plot and process standard unit particle distribution files.
End of explanation
filepath = './example/example.h5'
data = pt.ParticleDistribution(filepath)
data.su2si
data.dict['x']
Explanation: The module consists of a class 'ParticleDistribution' that initializes to a dictionary containing the following entries given a filepath:
|key | value |
|----|-----------|
|'x' | x position|
|'y' | y position|
|'z' | z position|
|'px'| x momentum|
|'py'| y momentum|
|'pz'| z momentum|
|'NE'| number of electrons per macroparticle|
The units are in line with the Standard Unit specifications, but can be converted to SI by calling the class method SU2SI
Values can then be called by calling the 'dict':
End of explanation
panda_data = data.DistFrame()
panda_data[0:5]
Explanation: Alternatively one can ask for a pandas dataframe where each column is one of the above properties of a macroparticle per row.
End of explanation
import matplotlib.pyplot as plt
plt.style.use('ggplot') # optional; only matplotlib.pyplot is imported above, so use plt rather than matplotlib here
x_axis = 'py'
y_axis = 'px'
plot = panda_data.plot(kind='scatter',x=x_axis,y=y_axis)
#sets axis limits
plot.set_xlim([panda_data[x_axis].min(),panda_data[x_axis].max()])
plot.set_ylim([panda_data[y_axis].min(),panda_data[y_axis].max()])
plt.show(plot)
Explanation: This allows for quick plotting using the inbuilt pandas methods
End of explanation
stats = pt.Statistics(filepath)
#preparing the statistics
stats.slice(100)
stats.calc_emittance()
stats.calc_CoM()
stats.calc_current()
#display pandas example
panda_stats = stats.StatsFrame()
panda_stats[0:5]
ax = panda_stats.plot(x='z_pos',y='CoM_y')
panda_stats.plot(ax=ax, x='z_pos',y='std_y',c='b') #first option allows shared axes
plt.show()
Explanation: If further statistical analysis is required, the class 'Statistics' is provided. This contains methods to process standard properties of the electron bunch. This is called by giving a filepath to 'Statistics' The following operations can be performed:
| Function | Effect and dict keys |
|---------------------|-------------------------------------------------------------------------------------------------------------------------------|
| calc_emittance | Calculates the emittance of all the slices, accessible by 'e_x' and 'e_y' |
| calc_CoM | Calculates the weighed averages and standard deviations per slice of every parameter and beta functions, see below for keys. |
| calc_current | Calculates current per slice, accessible in the dict as 'current'. |
|slice | Slices the data in equal slices of an integer number. |
This is a subclass of the ParticleDistribution and all the methods previously described work.
| CoM Keys | Parameter (per slice) |
|------------------------|------------------------------------------------------------|
| CoM_x, CoM_y, CoM_z | Centre of mass of x, y, z positions |
| std_x, std_y, std_z | Standard deviation of x, y, z positions |
| CoM_px, CoM_py, CoM_pz | Centre of mass of x, y, z momenta |
| std_px, std_py, std_pz | Standard deviation of x, y, z momenta |
| beta_x, beta_y | Beta functions (assuming Gaussian distribution) in x and y |
Furthermore, there is a 'Step_Z' which returns the size of a slice as well as 'z_pos' which gives you central position of a given slice.
And from this class both the DistFrame (containing the same data as above) and StatsFrame can be called:
End of explanation
FEL = pt.ProcessedData(filepath,num_slices=100,undulator_period=0.00275,k_fact=2.7)
panda_FEL = FEL.FELFrame()
panda_stats= FEL.StatsFrame()
panda_FEL[0:5]
Explanation: And finally there is the FEL_Approximations which calculate simple FEL properties per slice. This is a subclass of statistics and as such every method described above is callable.
This class contains the 'undulator' function that calculates planar undulator parameters given a period and either a peak magnetic field or K value.
The data must be sliced and most statistics have to be run before the other calculations can take place.
These are 'pierce' which calculates the pierce parameter and 1D gain length for a given slice, 'gain length' which calculates the Ming Xie gain and returns three entries in the dict 'MX_gain', '1D_gain', 'pierce', which hold an array for these values per slice.
'FELFrame' returns a pandas dataframe with these and 'z_pos' for reference.
To make this easier, the class ProcessedData takes a filepath, number of slices, undulator period, magnetic field or K and performs all the necessary steps automatically. As this is a subclass of FEL_Approximations all the values written above are accessible from here.
End of explanation
import pandas as pd
cat = pd.concat([panda_FEL,panda_stats], axis=1, join_axes=[panda_FEL.index]) #joins the two if you need to plot
#FEL parameters as well as slicel statistics on the same plot
cat['1D_gain']=cat['1D_gain']*40000000000 #one can scale to allow for visual comparison if needed
az = cat.plot(x='z_pos',y='1D_gain')
cat.plot(ax=az, x='z_pos',y='MX_gain',c='b')
plt.show()
Explanation: If it is important to plot the statistical data alongside the FEL data, that can be easily achieved by concatenating the two sets as shown below
End of explanation |
13,419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Cheshire objects and methods
This file wants to document how one can use Cheshire to query the CLiC database.
It fills in the gaps of the official cheshire documentation and it
provides a number of very specific, hands-on examples.
Author
Step1: Querying
Build a query. This does not hit the database itself.
Step2: A query can be printed as CQL or as XCQL
Step3: To search the database using this particular query, one needs to use the search method on a database object. This spits out a result set.
Step4: Handling the results
When using the chapter index, the result set is an iterable of results in which each chapter is one result. For the query above, there are thus 35 chapters that match the query.
Step5: Each result in the result set refers to a particular recordStore in which, surprise surprise, the actual chapter is stored.
Step6: Understanding the results
For each of these results a number of attributes can be accessed using the dot notation. The choices are
Step7: From what I gather, a result in a resultSet is only a pointer to the document and not the document itself.
The latter needs to be fetched and is generally called a record.
Records have the following attributes (most of which seem irrelevant for our purposes and several of which
only return empty strings)
Step8: The get_dom(session) method spits out the record in parsed xml.
This is essential for our purposes.
Step9: A record can be transformed into raw xml (in order to understand it), using
a method from lxml
Step10: This could also be used in simple python string manipulations.
For instance, to highlight something in a chapter, or to build
a concordance based on the raw string rather than an xml tree.
In that case one should note that only each occurrence of a term is
duplicated because it is present in <txt> and in its own word node.
Step11: Transforming a result
Rather than manually handling the xml like this, Cheshire has a class called a
Transformer that can perform xsl transformations on the xml of a chapter.
Transformers are defined in a configuration file. In our project they live in an
xsl file.
The following examples use a transformer that was not designed to work with our input,
but they do illustrate how transformers can be invoked.
Step12: Retrieving a chapter
Step13: Searching in a specific book
Step14: Messing around
Step15: Phrase search
Step16: And search
Step17: Or search
Step18: Proximity Information
Step19: Term highlighting | Python Code:
# coding: utf-8
import os
from cheshire3.baseObjects import Session
from cheshire3.document import StringDocument
from cheshire3.internal import cheshire3Root
from cheshire3.server import SimpleServer
session = Session()
session.database = 'db_dickens'
serv = SimpleServer(session, os.path.join(cheshire3Root, 'configs', 'serverConfig.xml'))
db = serv.get_object(session, session.database)
qf = db.get_object(session, 'defaultQueryFactory')
resultSetStore = db.get_object(session, 'resultSetStore')
idxStore = db.get_object(session, 'indexStore')
Explanation: Basic Cheshire objects and methods
This file documents how one can use Cheshire to query the CLiC database.
It fills in the gaps of the official cheshire documentation and it
provides a number of very specific, hands-on examples.
Author: Johan de Joode
Dates: 10/2/2015
Database: Dickens
Setup
End of explanation
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" and/cql.proxinfo c3.chapter-idx = "fog"')
Explanation: Querying
Build a query. This does not hit the database itself.
End of explanation
print query.toCQL()
print query.toXCQL()
Explanation: A query can be printed as CQL or as XCQL
End of explanation
result_set = db.search(session, query)
result_set
Explanation: To search the database using this particular query, one needs to use the search method on a database object. This spits out a result set.
End of explanation
print len(result_set)
Explanation: Handling the results
When using the chapter index, the result set is an iterable of results in which each chapter is one result. For the query above, there are thus 35 chapters that match the query.
End of explanation
for result in result_set:
print result
Explanation: Each result in the result set refers to a particular recordStore in which, surprise surprise, the actual chapter is stored.
End of explanation
for result in result_set:
print 'result.id: ', result.id
print 'result.database: ', result.database
print 'result.occurrences: ', result.occurences
print 'result.proxInfo: ', result.proxInfo
print "#########"
for result in result_set:
print result.attributesToSerialize
Explanation: Understanding the results
For each of these results a number of attributes can be accessed using the dot notation. The choices are:
result.attributesToSerialize
result.id
result.recordStore
result.database
result.diagnostic
result.fetch_record
result.proxInfo
result.weight
result.numericId
result.resultSet
result.occurences
result.serialize
result.scaledWeight
In our current setup it seems that results are not weighed.
proxInfo is one of the most important attributes for our purposes.
It describes the proximity information for a hit in a particular record,
or in other words, where in a record the search string can be found.
We currently assume the following values:
* the first item is the id of the root element from
which to start counting to find the word node
for instance, 0 for a chapter view (because the chapter
is the root element), but 151 for a search in quotes
text.
* the second item in the deepest list (169, 171)
is the id of the <w> (word) node
* the third element is the character offset,
or the exact character (spaces, and
and punctuation (stored in <n> (non-word) nodes
at which the search term starts
* the fourth element is the total amount of characters
in the document
End of explanation
for result in result_set:
rec = result.fetch_record(session)
print type(rec), rec
Explanation: From what I gather, a result in a resultSet is only a pointer to the document and not the document itself.
The latter needs to be fetched and is generally called a record.
Records have the following attributes (most of which seem irrelevant for our purposes and several of which
only return empty strings):
rec.baseUri rec.elementHash rec.get_sax rec.parent rec.rights
rec.byteCount rec.fetch_proxVector rec.get_xml rec.processHistory rec.sax
rec.context rec.fetch_vector rec.history rec.process_xpath rec.size
rec.digest rec.filename rec.id rec.recordStore rec.status
rec.dom rec.get_dom rec.metadata rec.resultSetItem rec.tagName
rec.wordCount rec.xml
End of explanation
for result in result_set:
rec = result.fetch_record(session)
print "rec.id: ", rec.id
print 'rec.wordCount: ', rec.wordCount
print 'rec.get_dom(session): ', rec.get_dom(session)
print "#######"
result_set.attributesToSerialize
result.attributesToSerialize
for result in result_set:
print result.serialize(session)
Explanation: The get_dom(session) method spits out the record in parsed xml.
This is essential for our purposes.
End of explanation
from lxml import etree
rec_tostring = etree.tostring(rec2)
print rec_tostring
Explanation: A record can be transformed into raw xml (in order to understand it), using
a method from lxml:
End of explanation
# find the first occurrence of the term love
# because that is what we are all looking for
love = rec_tostring.find('love')
conc_line = rec_tostring[love-50 : love + len('love') + 50]
conc_line.replace('love', 'LOVE')
Explanation: This could also be used in simple python string manipulations.
For instance, to highlight something in a chapter, or to build
a concordance based on the raw string rather than an xml tree.
In that case one should note that only each occurrence of a term is
duplicated because it is present in <txt> and in its own word node.
End of explanation
kwicTransformer = db.get_object(session, 'kwic-Txr')
print kwicTransformer
doc = kwicTransformer.process_record(session, rec).get_raw(session)
print doc
from cheshire3.transformer import XmlTransformer
dctxr = db.get_object(session, 'kwic-Txr')
dctxr
doc = dctxr.process_record(session, record)
print doc.get_raw(session)[:1000]
Explanation: Transforming a result
Rather than manually handling the xml like this, Cheshire has a class called a
Transformer that can perform xsl transformations on the xml of a chapter.
Transformers are defined in a configuration file. In our project they live in an
xsl file.
The following examples use a transformer that was not designed to work with our input,
but they do illustrate how transformers can be invoked.
End of explanation
query = qf.get_query(session, 'c3.book-idx = "LD"')
result_set = db.search(session, query)
chapter_1 = result_set[0]
chapter_44 = result_set[43]
chapter_1
rec = chapter_1.fetch_record(session).get_dom(session)
print rec
rec.attrib
rec.attrib['id']
type(rec)
print rec
doc = kwicTransformer.process_record(session, chapter_1.fetch_record(session)).get_raw(session)
print doc
articleTransformer = db.get_object(session, 'article-Txr')
doc = articleTransformer.process_record(session, chapter_1.fetch_record(session)).get_raw(session)
print doc
#FIXME How can you get immediately query for a chapter,
# rather than getting all chapters of a book first?
# --> you need to build a better index for this
query = qf.get_query(session, 'c3.book-idx "LD" and div.id = "LD.1"')
result_set = db.search(session, query)
len(result_set)
#TODO if recordStore's are unique AND they represent chapters, it could also be possible to simply
# get a particular recordStore from Cheshire (without querying the database again).
Explanation: Retrieving a chapter
End of explanation
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" and c3.chapter-idx = "fog" and c3.book-idx = "BH"')
result_set = db.search(session, query)
len(result_set)
Explanation: Searching in a specific book
End of explanation
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/cql.proxinfo c3.chapter-idx = "dense fog" \
') #and c3.chapter-idx = "dense"')
rs = db.search(session, query)
len(rs)
for result in rs:
print result.proxInfo
#FIXME it seems that occurences cannot be trusted?
print result.occurences
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/cql.proxinfo c3.chapter-idx = "the" \
')
query.addPrefix(query, 'test')
query.toCQL()
Explanation: Messing around
End of explanation
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/proxinfo c3.chapter-idx = "dense fog" \
')
rs = db.search(session, query)
total = 0
for result in rs:
total += len(result.proxInfo)
total
Explanation: Phrase search
End of explanation
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/cql.proxinfo c3.chapter-idx = "fog" \
and c3.chapter-idx = "dense"')
rs = db.search(session, query)
len(rs)
Explanation: And search
End of explanation
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and/cql.proxinfo c3.chapter-idx = "fog" \
or c3.chapter-idx = "dense"')
rs = db.search(session, query)
len(rs)
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.book-idx = "LD"')
rs = db.search(session, query)
len(rs)
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx = "he" prox/distance=1/unordered c3.chapter-idx = "said" \
or c3.chapter-idx = "did" or c3.chapter-idx = "wanted"')
rs = db.search(session, query)
len(rs)
#TODO not
#TODO wildcards
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx window/distance<5/unordered "low voice"')
rs = db.search(session, query)
len(rs)
for result in rs:
print result.proxInfo
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx window/distance<5/unordered "voice low"')
rs = db.search(session, query)
len(rs)
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx window/distance<5/unordered "low high"')
rs = db.search(session, query)
len(rs)
query = qf.get_query(session, 'c3.subcorpus-idx = "dickens" \
and c3.chapter-idx window/distance<3 "Mr Arthur said"')
rs = db.search(session, query)
len(rs)
Explanation: Or search
End of explanation
query = qf.get_query(session, '(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "dense fog")')
result_set = db.search(session, query)
count = 0
for result in result_set:
record = result.fetch_record(session)
print result.occurences, record #wordCount #.process_xpath('//w[@o=%s]' % result.proxInfo[0][1])
for y in result.proxInfo:
print y
count += 1
#TODO why does proxinfo only have three values here?
# --> because the last any does not have a proxinfo value
Explanation: Proximity Information
End of explanation
from cheshire3.transformer import LxmlQueryTermHighlightingTransformer
Explanation: Term highlighting
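A possible next step, following the same get_object/process_record pattern as the transformers used above (the object name 'highlightTxr' is hypothetical -- a matching transformer must exist in the database configuration for this to run):
highlightTxr = db.get_object(session, 'highlightTxr')  # hypothetical identifier
doc = highlightTxr.process_record(session, result_set[0].fetch_record(session))
print doc.get_raw(session)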
End of explanation |
13,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use of PYBIND11_MAKE_OPAQUE
pybind11 automatically converts std
Step1: Two identical classes
Both of then creates random vectors equivalent to std
Step2: Scenarii
Three possibilities | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
Explanation: Use of PYBIND11_MAKE_OPAQUE
pybind11 automatically converts std::vector into a Python list. That's convenient but not necessarily efficient depending on how it is used after that. PYBIND11_MAKE_OPAQUE is used to create a capsule holding a pointer to the C++ object.
End of explanation
from cpyquickhelper.examples.vector_container_python import (
RandomTensorVectorFloat, RandomTensorVectorFloat2)
rnd = RandomTensorVectorFloat(10, 10)
result = rnd.get_tensor_vector()
print(result)
result_ref = rnd.get_tensor_vector_ref()
print(result_ref)
rnd2 = RandomTensorVectorFloat2(10, 10)
result2 = rnd2.get_tensor_vector()
print(result2)
result2_ref = rnd2.get_tensor_vector_ref()
print(result2_ref)
%timeit rnd.get_tensor_vector()
%timeit rnd.get_tensor_vector_ref()
%timeit rnd2.get_tensor_vector()
%timeit rnd2.get_tensor_vector_ref()
Explanation: Two identical classes
Both of them create random vectors equivalent to std::vector<Tensor>, and Tensor ~ std::vector<float>. The first one returns a capsule due to PYBIND11_MAKE_OPAQUE(std::vector<OneTensorFloat>) being inserted into the C++ code. The other one returns a list.
End of explanation
import itertools
from cpyquickhelper.numbers.speed_measure import measure_time
from tqdm import tqdm
import pandas
data = []
sizes = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, 5000, 10000]
sizes = list(itertools.product(sizes, sizes))
for i, j in tqdm(sizes):
if j >= 1000:
if i > 1000:
continue
if i * j >= 1e6:
repeat, number = 3, 3
else:
repeat, number = 10, 10
rnd = RandomTensorVectorFloat(i, j)
obs = measure_time(lambda: rnd.get_tensor_vector(), repeat=repeat, number=number, div_by_number=True)
obs['name'] = 'capsule'
obs['n_vectors'] = i
obs['size'] = j
data.append(obs)
rnd2 = RandomTensorVectorFloat2(i, j)
obs = measure_time(lambda: rnd2.get_tensor_vector(), repeat=repeat, number=number, div_by_number=True)
obs['name'] = 'list'
obs['n_vectors'] = i
obs['size'] = j
data.append(obs)
obs = measure_time(lambda: rnd2.get_tensor_vector_ref(), repeat=repeat, number=number, div_by_number=True)
obs['name'] = 'ref'
obs['n_vectors'] = i
obs['size'] = j
data.append(obs)
df = pandas.DataFrame(data)
df.tail()
piv = pandas.pivot_table(df, index=['n_vectors', 'size'], columns=['name'], values='average')
piv['ratio'] = piv['capsule'] / piv['list']
piv.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
piv[['capsule', 'list', 'ref']].plot(logy=True, ax=ax[0], title='Capsule (OPAQUE) / list')
piv.sort_values('ratio', ascending=False)[['ratio']].plot(ax=ax[1], title='Ratio Capsule (OPAQUE) / list');
flat = piv.reset_index(drop=False)[['n_vectors', 'size', 'ratio']]
flat_piv = flat.pivot('n_vectors', 'size', 'ratio')
flat_piv
import numpy
import seaborn
seaborn.heatmap(numpy.minimum(flat_piv.values, 1), cmap="YlGnBu",
xticklabels=list(flat_piv.index), yticklabels=list(flat_piv.columns));
Explanation: Scenarii
Three possibilities:
list: std::vector<Tensor> is converted into a list of copied Tensors
capsule: std::vector<Tensor> is converted into a capsule on a copied std::vector<Tensor>; the capsule still holds the pointer and is responsible for the deletion.
ref: std::vector<Tensor> is just returned as a pointer. The cost of getting the pointer does not depend on the content size. It is effectively the lower bound.
Plots
End of explanation |
13,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1"><a href="#Seriation-Classification
Step1: sklearn-mmadsen is a python package of useful machine learning tools that I'm accumulating for research and commercial work. You can find it at http
Step3: Initial Classification Attempt
Let's just see if the graph spectral distance does anything useful at all, or whether I'm barking up the wrong tree. I imagine that we want a few neighbors (to rule out relying on a single neighbor which might be anomalous), but not too many. So let's start with k=5.
The approach here is to essentially do a "leave one out" strategy on the dataset. The KNN model isn't really "trained" in the usual sense of the term, so we don't need to separate a test and train set, we just need to make sure that the target graph we're trying to predict is not one of the "training" graphs that we calculate spectral distances to, otherwise the self-matching of the graph will always predict zero distance. So we first define a simple function which splits a graph out of the training set and returns the rest. I'd use scikit-learn functions for this, but our "data" is really a list of NetworkX objects, not a numeric matrix.
Step4: For a first try, this is pretty darned good, I think. Almost 77% of the time, we can correctly predict whether a seriation solution from one of two models belongs to the correct model. It would be nice to get that accuracy to near perfect if possible, howeve, because the goal here is to examine the fit between an empirical solution and a number of models, and the empirical solution will never have arisen from one of our pure theoretical models.
Leave-One-Out Cross Validation for Selecting Optimal K
Before working on more complex approaches, let's simply make sure we're choosing the optimal number of neighbors for the k-Nearest Neighbors classifier. | Python Code:
import numpy as np
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import cPickle as pickle
from copy import deepcopy
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix
train_graphs = pickle.load(open("train-freq-graphs.pkl",'r'))
train_labels = pickle.load(open("train-freq-labels.pkl",'r'))
Explanation: Table of Contents
<p><div class="lev1"><a href="#Seriation-Classification:--sc-1"><span class="toc-item-num">1 </span>Seriation Classification: sc-1</a></div><div class="lev2"><a href="#Initial-Classification-Attempt"><span class="toc-item-num">1.1 </span>Initial Classification Attempt</a></div><div class="lev2"><a href="#Leave-One-Out-Cross-Validation-for-Selecting-Optimal-K"><span class="toc-item-num">1.2 </span>Leave-One-Out Cross Validation for Selecting Optimal K</a></div>
# Seriation Classification: sc-1 #
The goal of experiment `sc-1` is to validate that the Laplacian eigenvalue spectral distance can be useful in k-Nearest Neighbor classifiers for seriation output. In this experiment, I take a supervised learning approach, starting with two regional metapopulation models, simulating unbiased cultural transmission with 50 replicates across each model, sampling and time averaging the resulting cultural trait distributions in archaeologically realistic ways, and then seriating the results using our IDSS algorithm. Each seriation resulting from this procedure is thus "labeled" as to the regional metapopulation model from which it originated, so we can assess the accuracy of predicting that label based upon the graph spectral similarity.
End of explanation
import sklearn_mmadsen.graphs as skm
Explanation: sklearn-mmadsen is a python package of useful machine learning tools that I'm accumulating for research and commercial work. You can find it at http://github.com/mmadsen/sklearn-mmadsen.
End of explanation
gclf = skm.GraphEigenvalueNearestNeighbors(n_neighbors=5)
def leave_one_out_cv(ix, train_graphs, train_labels):
Simple LOO data sets for kNN classification, given an index, returns a train set, labels, with the left out
graph and label as test_graph, test_label
test_graph = train_graphs[ix]
test_label = train_labels[ix]
train_loo_graphs = deepcopy(train_graphs)
train_loo_labels = deepcopy(train_labels)
del train_loo_graphs[ix]
del train_loo_labels[ix]
return (train_loo_graphs, train_loo_labels, test_graph, test_label)
test_pred = []
for ix in range(0, len(train_graphs)):
train_loo_graphs, train_loo_labels, test_graph, test_label = leave_one_out_cv(ix, train_graphs, train_labels)
gclf.fit(train_loo_graphs, train_loo_labels)
test_pred.append(gclf.predict([test_graph])[0])
cm = confusion_matrix(train_labels, test_pred)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print(classification_report(train_labels, test_pred))
print("Accuracy on test: %0.3f" % accuracy_score(train_labels, test_pred))
sns.heatmap(cm.T, square=True, annot=True, fmt='d', cbar=False)
Explanation: Initial Classification Attempt
Let's just see if the graph spectral distance does anything useful at all, or whether I'm barking up the wrong tree. I imagine that we want a few neighbors (to rule out relying on a single neighbor which might be anomalous), but not too many. So let's start with k=5.
The approach here is to essentially do a "leave one out" strategy on the dataset. The KNN model isn't really "trained" in the usual sense of the term, so we don't need to separate a test and train set, we just need to make sure that the target graph we're trying to predict is not one of the "training" graphs that we calculate spectral distances to, otherwise the self-matching of the graph will always predict zero distance. So we first define a simple function which splits a graph out of the training set and returns the rest. I'd use scikit-learn functions for this, but our "data" is really a list of NetworkX objects, not a numeric matrix.
End of explanation
knn = [1, 3, 5, 7, 9, 11, 15]
for nn in knn:
gclf = skm.GraphEigenvalueNearestNeighbors(n_neighbors=nn)
test_pred = []
for ix in range(0, len(train_graphs)):
train_loo_graphs, train_loo_labels, test_graph, test_label = leave_one_out_cv(ix, train_graphs, train_labels)
gclf.fit(train_loo_graphs, train_loo_labels)
test_pred.append(gclf.predict([test_graph])[0])
print("Accuracy on test for %s neighbors: %0.3f" % (nn, accuracy_score(train_labels, test_pred)))
Explanation: For a first try, this is pretty darned good, I think. Almost 77% of the time, we can correctly predict whether a seriation solution from one of two models belongs to the correct model. It would be nice to get that accuracy to near perfect if possible, however, because the goal here is to examine the fit between an empirical solution and a number of models, and the empirical solution will never have arisen from one of our pure theoretical models.
Leave-One-Out Cross Validation for Selecting Optimal K
Before working on more complex approaches, let's simply make sure we're choosing the optimal number of neighbors for the k-Nearest Neighbors classifier.
End of explanation |
13,422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read HIV Data
Step1: Read in Clinical Data
Step2: Update clinical data with new data provided by Howard Fox
Step3: Clean up diabetes across annotation files
Step4: All of the patients are white or Caucasian
Step5: Sex is not recorded for the cases but they are all HIV+ men.
Step6: Fix BMI to unified labels
Step7: Trimming the clinical dataset
None of the patients are diabetic
Step8: All of the patients are hepatitis C negative
Step9: All patients are currently using anti-retoviral therepy, but 5 patients treatment is not classified as HAART
Step10: All patients are reported as adhererent, but a few are not 100% adherent
Step11: We have a wide varienty of regimens with 81 unique combinations of 35 drugs
Step12: These can be broken down into 8 regimen types
Step13: IADL = Instrumental activities of daily living
Step14: Global imparement
Step15: Patient's Assessment of Own Functioning Inventory (PAOFI)
Step16: RPR (Rapid plasma reagin) is a diagnostic used to detect syphilis
Step17: Batches
Step18: Beck Depression Inventory is a questionarre measuring depression levels (from Wikipedia)
* 0–13
Step19: Read in lab blood work data
abstract exam date onto exam year for anonymity
Step20: Dropping five patients because they don't look Kosher
Renormalizing cell percentages because some don't sum to 100%
Step21: As can be seen above, we have two groups of patients with respect to HIV duration, we don't really have the sample size to tease apart any differences other than this main distiction so for now I am just treating duration of HIV as a categorical variable (e.g. controls, short exposure and long exposure)
Step22: Patient Selection Criteria
There are a couple of female patients in the controls, we are going to get rid of those as all of the cases are males
Most of the HIV patients are under HAART therepy, there are a few that are not and we are going to filter those out for now and possibly look at them after the primary analysis
Step23: Read in HIV Methylation data
Read in quantile-normalized data, adjusted for cellular compositions and then normalized agin using BMIQ.
Step24: Read in data processed with BMIQ using Horvath's gold standard.
Step25: Adjust this data for cellular composition. This is done after the normalization to not mess around with Horvath's pipeline too much.
Step26: Set up Probe Filters | Python Code:
import os
if os.getcwd().endswith('Setup'):
os.chdir('..')
import NotebookImport
from Setup.Imports import *
Explanation: Read HIV Data
End of explanation
c1 = pd.read_excel(ucsd_path + 'DESIGN_Fox_v2_Samples-ChipLAyout-Clinical UNMC-UCSD methylomestudy.xlsx',
'HIV- samples from OldStudy', index_col=0)
c2 = pd.read_excel(ucsd_path + 'DESIGN_Fox_v2_Samples-ChipLAyout-Clinical UNMC-UCSD methylomestudy.xlsx',
'HIV+ samples', index_col=0)
clinical = c1.append(c2)
clinical['Sentrix_Position'] = clinical['Sentrix_Position\\'].map(lambda s: s[:-1])
del clinical['Sentrix_Position\\']
Explanation: Read in Clinical Data
End of explanation
age_new = pd.read_csv(ucsd_path + 'UpdatesAges-Infection.csv', index_col=0)
age = age_new.age.combine_first(clinical.age)
age.name= 'age'
clinical['age'] = age
l = 'estimated duration hiv (months)'
clinical[l] = age_new['Estimated Duration HIV+ (months)'].combine_first(clinical[l])
Explanation: Update clinical data with new data provided by Howard Fox
End of explanation
diabetes = clinical['diabetes'].combine_first(clinical['Diabetes @ 000'])
diabetes = diabetes.replace('N','no')
clinical['diabetes'] = diabetes
del clinical['Diabetes @ 000']
diabetes.value_counts()
Explanation: Clean up diabetes across annotation files
End of explanation
ethnicity = clinical.ethnicity
ethnicity = ethnicity.replace('wht','white')
ethnicity = ethnicity.replace('Caucasian - European','white')
clinical['ethnicity'] = ethnicity
ethnicity.value_counts()
Explanation: All of the patients are white or Caucasian
End of explanation
clinical['sex'] = clinical['sex'].fillna('M')
Explanation: Sex is not recorded for the cases but they are all HIV+ men.
End of explanation
bmi = clinical['bmi'].combine_first(clinical['BMI'])
clinical['BMI'] = bmi
clinical = clinical[clinical.columns.diff(['bmi'])]
#Do not import
bmi.hist()
current_usage = ["Current 'Other' dx", 'Current Alcohol dx',
'Current Bipolar I', 'Current Bipolar II',
'Current Cannabis dx', 'Current Cocaine dx',
'Current Dysthymia', 'Current Halucinogen dx',
'Current Inhalant dx', 'Current MDD',
'Current Methamphetamine dx', 'Current Opioid dx',
'Current PCP dx', 'Current Sedative dx',
'Any Current Substance dx']
current_usage = clinical[current_usage]
current_usage.dropna(how='all').apply(pd.value_counts).fillna(0).T
past_usage = ["LT 'Other' dx", 'LT Alcohol dx', 'LT Bipolar I',
'LT Bipolar II', 'LT Cannabis dx', 'LT Cocaine dx',
'LT Dysthymia', 'LT Halucinogen dx', 'LT Inhalant dx',
'LT MDD', 'LT Methamphetamine dx', 'LT Opioid dx',
'LT PCP dx', 'LT Sedative dx', 'Any LT Substance dx']
past_usage = clinical[past_usage]
past_usage.dropna(how='all').apply(pd.value_counts).fillna(0).T
usage = current_usage.join(past_usage).dropna(how='all')
usage.to_csv(FIGDIR + 'drug_usage.csv')
merge_col = ['age','BMI','sex','diabetes','ethnicity','Sample_Plate','Sample_Well',
'Sentrix_ID','Sentrix_Position']
shared_clinical = clinical[merge_col]
shared_clinical.to_csv(FIGDIR + 'shared_clinical.csv')
c = [u'Site', u'ARV History', u'ARV Status', u'BDI > 17', u'CDC stage',
u'estimated duration hiv (months)', u'Current Regimen',
u'Regimen Type', u'adherence %',
u'HCV', u'IADL', u'RPR', u'Utox',
u'beck total', u'global impairment', u'paofi total',]
hiv_clinical = clinical[c].dropna(axis=0)
hiv_clinical.to_csv(FIGDIR + 'hiv_clinical.csv')
Explanation: Fix BMI to unified labels
End of explanation
clinical.diabetes.value_counts()
clinical['Diabetes @ 000'].value_counts()
Explanation: Trimming the clinical dataset
None of the patients are diabetic
End of explanation
clinical['HCV'].dropna(0).value_counts(0)
Explanation: All of the patients are hepatitis C negative
End of explanation
clinical['ARV History'].value_counts()
clinical['ARV Status'].value_counts()
Explanation: All patients are currently using anti-retoviral therepy, but 5 patients treatment is not classified as HAART
End of explanation
clinical['adherent'].value_counts()
#Do not import
clinical['adherence %'].hist()
Explanation: All patients are reported as adhererent, but a few are not 100% adherent
End of explanation
reg = clinical['Current Regimen'].dropna().str.split('/').map(sorted)
drugs = {r for s in reg for r in s}
drug_mat = pd.DataFrame({i: {d: d in s for d in drugs} for i,s in
reg.iteritems()}).T
drug_mat.sum().order()
Explanation: We have a wide variety of regimens with 81 unique combinations of 35 drugs
End of explanation
clinical['Regimen Type'].value_counts()
kill_list = ['zhang id', 'diabetes', 'Methylation ID', 'Diabetes @ 000',
'Sentrix_ID','Sample_Plate','Sample_Well','Sentrix_Position\\',
]
drugs = ['ARV History', 'ARV Status', 'Current Regimen', 'Regimen Type',
'adherence %' ,'adherent']
left = [c for c in clinical if c not in past_usage and c not in current_usage
and c not in drugs]
age = clinical.age
clinical['BDI > 17'].value_counts()
clinical['ARV History'].value_counts()
clinical['CDC stage'].value_counts()
Explanation: These can be broken down into 8 regimen types
End of explanation
iadl = clinical.IADL
iadl.value_counts()
Explanation: IADL = Instrumental activities of daily living
End of explanation
clinical['global impairment'].value_counts()
Explanation: Global imparement
End of explanation
#Do not import
paofi = clinical['paofi total']
paofi.hist()
Explanation: Patient's Assessment of Own Functioning Inventory (PAOFI)
End of explanation
clinical.RPR.value_counts()
Explanation: RPR (Rapid plasma reagin) is a diagnostic used to detect syphilis
End of explanation
site = clinical.Site
site.value_counts()
clinical.Utox.value_counts()
Explanation: Batches
End of explanation
#Do not import
beck = clinical['beck total'].dropna()
beck.hist()
#Do not import
clinical['paofi total'].hist()
Explanation: Beck Depression Inventory is a questionnaire measuring depression levels (from Wikipedia)
* 0–13: minimal depression
* 14–19: mild depression
20–28: moderate depression
29–63: severe depression.
End of explanation
labs = pd.read_excel(ucsd_path + 'fox_methylation_labdata_073014.xlsx',
index_col=0)
labs['nb exam year']= labs['nb exam date'].map(lambda s: s.year)
del labs['nb exam date']
labs = labs.dropna(axis=1, how='all')
labs = labs.ix[labs.index.intersection(clinical.index)]
hiv_rna = labs[['rnvalue PLASMA','LLQ PLASMA']]
hiv_rna.to_csv(FIGDIR + 'hiv_rna.csv')
labs.WBC.hist()
chem = ['WBC','RBC','HGB','HCT','MCV',
'MCH','MCHC','Platelets']
labs[chem].to_csv(FIGDIR + 'blood_workup.csv')
imm= ['CD4 Absolute',
'CD8 Absolute',
'CD3 Absolute',
'CD4/CD8 ratio',
'Neutrophil %','Lymphocyte %',
'Monocyte %','Eosinophil %','Basophil %']
cell_types = ['Neutrophil %','Lymphocyte %','Monocyte %',
'Eosinophil %','Basophil %']
hiv_imm = labs[imm].dropna(how='all')
Explanation: Read in lab blood work data
abstract exam date onto exam year for anonymity
End of explanation
keepers = labs.index.diff(['RG065','RG175','RG279','RA182','RM285'])
hiv_imm[cell_types] = hiv_imm[cell_types].div(hiv_imm[cell_types].sum(1), axis=0) * 100
keep = pd.Series('Keep', keepers).ix[hiv_imm.index].fillna('Reject')
hiv_imm['QC status'] = keep
hiv_imm.to_csv(FIGDIR + 'HIV_cell_comp.csv')
#Do not import
fig, axs = subplots(1,2, figsize=(9,4))
clinical.age.hist(ax=axs[0])
clinical['estimated duration hiv (months)'].hist(ax=axs[1])
axs[0].set_xlabel('Age')
axs[1].set_xlabel('estimated duration hiv (months)')
for ax in axs:
ax.set_ylabel('# of Patients')
prettify_ax(ax)
Explanation: Dropping five patients because they don't look Kosher
Renormalizing cell percentages because some don't sum to 100%
End of explanation
duration = clinical['estimated duration hiv (months)']
duration = (1.*duration.notnull()) + (1.*duration > 100)
duration = duration.map({0:'Control',1:'HIV Short',2:'HIV Long'})
duration.value_counts()
Explanation: As can be seen above, we have two groups of patients with respect to HIV duration, we don't really have the sample size to tease apart any differences other than this main distiction so for now I am just treating duration of HIV as a categorical variable (e.g. controls, short exposure and long exposure)
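An equivalent, slightly more explicit way to write the same binning is sketched below (plain boolean masks instead of the arithmetic trick; the 100-month threshold comes from the cell above):
months = clinical['estimated duration hiv (months)']
alt = pd.Series('Control', index=months.index)
alt[months.notnull() & (months <= 100)] = 'HIV Short'
alt[months > 100] = 'HIV Long'
alt.value_counts()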
End of explanation
duration = duration.ix[ti(clinical['ARV Status'] != 'non-HAART')]
duration = duration.ix[ti(clinical.sex != 'F')].dropna()
duration.value_counts()
Explanation: Patient Selection Criteria
There are a couple of female patients in the controls, we are going to get rid of those as all of the cases are males
Most of the HIV patients are under HAART therepy, there are a few that are not and we are going to filter those out for now and possibly look at them after the primary analysis
End of explanation
df_hiv = pd.read_hdf('/data/methylation_norm.h5', 'quant_BMIQ_adj')
df_hiv = df_hiv.ix[:, duration.index]
df_hiv = df_hiv.dropna(1)
Explanation: Read in HIV Methylation data
Read in quantile-normalized data, adjusted for cellular compositions and then normalized agin using BMIQ.
End of explanation
df_hiv_n = pd.read_hdf('/data/methylation_norm.h5', 'BMIQ_Horvath')
df_hiv_n = df_hiv_n.ix[:, duration.index]
df_hiv_n = df_hiv_n.dropna(1)
Explanation: Read in data processed with BMIQ using Horvath's gold standard.
End of explanation
flow_sorted_data = pd.read_hdf('/data/methylation_annotation.h5','flow_sorted_data_horvath_norm')
cell_type = pd.read_hdf('/data/methylation_annotation.h5', 'label_map')
cell_counts = pd.read_hdf('/data/dx_methylation.h5', 'cell_counts')
n2 = flow_sorted_data.groupby(cell_type, axis=1).mean()
avg = n2[cell_counts.columns].dot(cell_counts.ix[df_hiv.columns].T)
d2 = df_hiv_n.ix[avg.index, df_hiv.columns].dropna(axis=[0,1], how='all')
cc = avg.ix[:, ti(duration=='Control')].mean(1)
df_hiv_n = (d2 - avg).add(cc, axis=0).dropna(how='all')
keepers = duration.index.intersection(df_hiv.columns.intersection(df_hiv_n.columns))
duration = duration.ix[keepers]
duration.value_counts()
store = pd.HDFStore('/data/dx_methylation.h5')
study = store['study']
age = store['age']
gender = store['gender']
Explanation: Adjust this data for cellular composition. This is done after the normalization to not mess around with Horvath's pipeline too much.
End of explanation
detection_p = pd.read_hdf('/data/dx_methylation.h5', 'detection_p')
#detection_p = detection_p[detection_p[0] > 10e-5]
detection_p = detection_p[detection_p.Sample_Name.isin(duration.index)]
ff = detection_p.groupby('level_0').size() > 3
ff.value_counts()
STORE = '/data/methylation_annotation.h5'
snps = pd.read_hdf(STORE, 'snps')
snp_near = (snps.Probe_SNPs != '')
snp_near.value_counts()
probe_idx = df_hiv.index.diff(ti(ff))
Explanation: Set up Probe Filters
End of explanation |
13,423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encontro 22
Step1: Start of Activity 1
Defining a function that generates a random graph in which the probability of an edge existing is c over the number of nodes
Step2: Generating a graph by passing specific parameters to the function above.
Step3: Checking whether the degree distribution of rg follows a Poisson with mean c
Step4: Computing the variation of the clustering coefficient and average distance as we increase num_nodes
Step5: Plotting the variation of the clustering coefficient
Step6: Plotting the variation of the average distance
Step7: Start of Activity 2
Defining a function that generates a circular graph
Step8: Computing the variation of the clustering coefficient and average distance as we increase num_nodes
Step9: Plotting the variation of the clustering coefficient
Step10: Plotting the variation of the average distance
Step11: Start of Activity 3
Defining a function that generates a hybrid graph
Step12: The next plots will be for fixed N and C. For convenience, we repeat the definition.
Step13: Computing the variation of the clustering coefficient and average distance as we increase p
Step14: Comparing the variation of the clustering coefficient against the reference value from the random model.
In a "small world", the clustering coefficient is expected to be above this reference value.
Step15: Comparing the variation of the average distance against the reference value from the circular model.
In a "small world", the average distance is expected to be below this reference value. | Python Code:
import sys
sys.path.append('..')
import socnet as sn
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Encontro 22: Small Worlds
Importing the libraries:
End of explanation
from random import random
def generate_random_graph(num_nodes, c):
g = sn.generate_empty_graph(num_nodes)
nodes = list(g.nodes)
for i in range(num_nodes):
n = nodes[i]
for j in range(i + 1, num_nodes):
m = nodes[j]
if random() < c / num_nodes:
g.add_edge(n, m)
return g
Explanation: Start of Activity 1
Defining a function that generates a random graph in which the probability of an edge existing is c over the number of nodes:
End of explanation
N = 100
C = 10
rg = generate_random_graph(N, C)
Explanation: Generating a graph by passing specific parameters to the function above.
End of explanation
from scipy.stats import poisson
x = range(N)
plt.hist([rg.degree(n) for n in rg.nodes], x, normed=True)
plt.plot(x, poisson.pmf(C, x));
Explanation: Checking whether the degree distribution of rg follows a Poisson with mean c:
End of explanation
x = []
rcc = []
rad = []
for num_nodes in range(C + 1, N):
g = generate_random_graph(num_nodes, C)
x.append(num_nodes)
rcc.append(sn.average_clustering_coefficient(g))
rad.append(sn.average_distance(g))
Explanation: Computing how the clustering coefficient and the average distance vary as num_nodes increases:
End of explanation
plt.plot(x, rcc);
Explanation: Plotting the variation of the clustering coefficient:
End of explanation
plt.plot(x, rad);
Explanation: Plotting the variation of the average distance:
End of explanation
def generate_circular_graph(num_nodes, c):
g = sn.generate_empty_graph(num_nodes)
nodes = list(g.nodes)
for i in range(num_nodes):
n = nodes[i]
for delta in range(1, c // 2 + 1):
j = (i + delta) % num_nodes
m = nodes[j]
g.add_edge(n, m)
return g
Explanation: Start of Activity 2
Defining a function that generates a circular graph:
End of explanation
ccc = []
cad = []
for num_nodes in x:
g = generate_circular_graph(num_nodes, C)
ccc.append(sn.average_clustering_coefficient(g))
cad.append(sn.average_distance(g))
Explanation: Computing how the clustering coefficient and the average distance vary as num_nodes increases:
End of explanation
plt.plot(x, ccc);
Explanation: Plotting the variation of the clustering coefficient:
End of explanation
plt.plot(x, cad);
Explanation: Plotting the variation of the average distance:
End of explanation
from random import choice
def generate_hybrid_graph(num_nodes, c, p):
g = generate_circular_graph(num_nodes, c)
for n in g.nodes:
non_neighbors = set(g.nodes)
for m in g.neighbors(n):
non_neighbors.remove(m)
non_neighbors.remove(n)
for m in list(g.neighbors(n)):
if random() < p:
g.remove_edge(n, m)
non_neighbors.add(m)
l = choice(list(non_neighbors))
non_neighbors.remove(l)
g.add_edge(n, l)
return g
Explanation: Start of Activity 3
Defining a function that generates a hybrid graph:
End of explanation
N = 100
C = 10
Explanation: The next plots use fixed N and C. For convenience, we repeat the definition.
End of explanation
x = []
hcc = []
had = []
for ip in range(0, 11):
p = ip / 10
g = generate_hybrid_graph(N, C, p)
x.append(p)
hcc.append(sn.average_clustering_coefficient(g))
had.append(sn.average_distance(g))
Explanation: Computing how the clustering coefficient and the average distance vary as p increases:
End of explanation
plt.plot(x, 11 * [C / N])
plt.plot(x, hcc);
Explanation: Comparing the variation of the clustering coefficient with the reference value from the random model.
In a "small world", the clustering coefficient is expected to be above this reference value.
End of explanation
plt.plot(x, 11 * [N / (2 * C)])
plt.plot(x, had);
Explanation: Comparing the variation of the average distance with the reference value from the circular model.
In a "small world", the average distance is expected to be below this reference value.
End of explanation |
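Pulling the two comparisons together (an illustrative addition, not part of the original notebook): a common single-number summary is the small-world coefficient sigma = (C/C_rand) / (L/L_rand), which should be well above 1 for a small world. The sketch below reuses the reference values C/N and N/(2*C) already plotted above.
g = generate_hybrid_graph(N, C, 0.1)
C_g = sn.average_clustering_coefficient(g)
L_g = sn.average_distance(g)
C_rand, L_rand = C / N, N / (2 * C)
sigma = (C_g / C_rand) / (L_g / L_rand)
print('sigma =', sigma)  # much greater than 1 suggests a small world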
13,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reinforcement Learning
Step1: Ok, on to the agent itself. I'll present the code in full here, then explain parts in more detail.
Step2: You'll see that this is quite similar to the previous Q-learning agent we implemented. There are explore and discount values, for example. But the Q function is now a neural network.
The biggest differences are these remember and replay methods.
A challenge with DQNs is that they can be unstable - in particular, they exhibit a problem known as catastrophic forgetting in which later experiences overwrite earlier ones. When this happens, the agent is unable to take full advantage of everything it's learned, only what it's learned most recently.
A method to deal with this is called experience replay. We just store experienced states and their resulting rewards (as "memories"), then between actions we sample a batch of these memories (this is what the _prep_batch method does) and use them to train the neural network (i.e. "replay" these remembered experiences). This will become clearer in the code below, where we actually train the agent.
Step3: Here we train the agent for 6500 epochs (that is, 6500 games). We also keep a trailing record of its wins to see if its win rate is improving.
A game goes on until the reward is non-zero, which means the agent has either lost (reward of -1) or won (reward of +1). Note that between each action the agent "remembers" the states and reward it just saw, as well as the action it took. Then it "replays" past experiences to update its neural network.
Once the agent is trained, we can play a round and see if it wins.
Depicted below are the results from my training | Python Code:
import numpy as np
from blessings import Terminal
class Game():
def __init__(self, shape=(10,10)):
self.shape = shape
self.height, self.width = shape
self.last_row = self.height - 1
self.paddle_padding = 1
self.n_actions = 3 # left, stay, right
self.term = Terminal()
self.reset()
def reset(self):
# reset grid
self.grid = np.zeros(self.shape)
# can only move left or right (or stay)
# so position is only its horizontal position (col)
self.pos = np.random.randint(self.paddle_padding, self.width - 1 - self.paddle_padding)
self.set_paddle(1)
# item to catch
self.target = (0, np.random.randint(self.width - 1))
self.set_position(self.target, 1)
def move(self, action):
# clear previous paddle position
self.set_paddle(0)
# action is either -1, 0, 1,
# but comes in as 0, 1, 2, so subtract 1
action -= 1
self.pos = min(max(self.pos + action, self.paddle_padding), self.width - 1 - self.paddle_padding)
# set new paddle position
self.set_paddle(1)
def set_paddle(self, val):
for i in range(1 + self.paddle_padding*2):
pos = self.pos - self.paddle_padding + i
self.set_position((self.last_row, pos), val)
@property
def state(self):
return self.grid.reshape((1,-1)).copy()
def set_position(self, pos, val):
r, c = pos
self.grid[r,c] = val
def update(self):
r, c = self.target
self.set_position(self.target, 0)
self.set_paddle(1) # in case the target is on the paddle
self.target = (r+1, c)
self.set_position(self.target, 1)
# off the map, it's gone
if r + 1 == self.last_row:
# reward of 1 if collided with paddle, else -1
if abs(c - self.pos) <= self.paddle_padding:
return 1
else:
return -1
return 0
def render(self):
print(self.term.clear())
for r, row in enumerate(self.grid):
for c, on in enumerate(row):
if on:
color = 235
else:
color = 229
print(self.term.move(r, c) + self.term.on_color(color) + ' ' + self.term.normal)
# move cursor to end
print(self.term.move(self.height, 0))
Explanation: Reinforcement Learning: Deep Q-Networks
If you aren't familiar with reinforcement learning, check out the previous guide on reinforcement learning for an introduction.
In the previous guide we implemented the Q function as a lookup table. That worked well enough for that scenario because it had a fairly small state space. However, consider something like DeepMind's Atari player. A state in that task is a unique configuration of pixels. All those Atari games are color, so each pixel has three values (R,G,B), and there are quite a few pixels. So there is a massive state space for all possible configurations of pixels, and we simply can't implement a lookup table encompassing all of these states - it would take up too much memory.
Instead, we can learn a Q function that approximately maps a set of pixel values and an action to some value. We could implement this Q function as a neural network and have it learn how to predict rewards for each action given an input state. This is the general idea behind deep Q-learning (i.e. deep Q networks, or DQNs).
Here we'll put together a simple DQN agent that learns how to play a simple game of catch. The agent controls a paddle at the bottom of the screen that it can move left, right, or not at all (so there are three possible action). An object falls from the top of the screen, and the agent wins if it catches it (a reward of +1). Otherwise, it loses (a reward of -1).
We'll implement the game in black-and-white so that the pixels in the game can be represented as 1 or 0.
Using DQNs are quite like using neural networks in ways you may be more familiar with. Here we'll take a vector that represents the screen, feed it through the network, and the network will output a distribution of values over possible actions. You can kind of think of it as a classification problem: given this input state, label it with the best action to take.
For example, this is the architecture of the Atari player:
The scenario we're dealing with is simple enough that we don't need convolutional neural networks, but we could easily extend it in that way if we wanted (just replace our vanilla neural network with a convolutional one).
Here's what our catch game will look like:
To start I'll present the code for the catch game itself. It's not important that you understand this code - the part we care about is the agent itself.
Note that this needs to be run in the terminal in order to visualize the game.
End of explanation
import os
#if using Theano with GPU
#os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32"
import random
from keras.models import Sequential
from keras.layers.core import Dense
from collections import deque
class Agent():
def __init__(self, env, explore=0.1, discount=0.9, hidden_size=100, memory_limit=5000):
self.env = env
model = Sequential()
model.add(Dense(hidden_size, input_shape=(env.height * env.width,), activation='relu'))
model.add(Dense(hidden_size, activation='relu'))
model.add(Dense(env.n_actions))
model.compile(loss='mse', optimizer='sgd')
self.Q = model
# experience replay:
# remember states to "reflect" on later
self.memory = deque([], maxlen=memory_limit)
self.explore = explore
self.discount = discount
def choose_action(self):
if np.random.rand() <= self.explore:
return np.random.randint(0, self.env.n_actions)
state = self.env.state
q = self.Q.predict(state)
return np.argmax(q[0])
def remember(self, state, action, next_state, reward):
# the deque object will automatically keep a fixed length
self.memory.append((state, action, next_state, reward))
def _prep_batch(self, batch_size):
if batch_size > self.memory.maxlen:
Warning('batch size should not be larger than max memory size. Setting batch size to memory size')
batch_size = self.memory.maxlen
batch_size = min(batch_size, len(self.memory))
inputs = []
targets = []
# prep the batch
# inputs are states, outputs are values over actions
batch = random.sample(list(self.memory), batch_size)
random.shuffle(batch)
for state, action, next_state, reward in batch:
inputs.append(state)
target = self.Q.predict(state)[0]
# debug, "this should never happen"
assert not np.array_equal(state, next_state)
# non-zero reward indicates terminal state
if reward:
target[action] = reward
else:
# reward + gamma * max_a' Q(s', a')
Q_sa = np.max(self.Q.predict(next_state)[0])
target[action] = reward + self.discount * Q_sa
targets.append(target)
# to numpy matrices
return np.vstack(inputs), np.vstack(targets)
def replay(self, batch_size):
inputs, targets = self._prep_batch(batch_size)
loss = self.Q.train_on_batch(inputs, targets)
return loss
def save(self, fname):
self.Q.save_weights(fname)
def load(self, fname):
self.Q.load_weights(fname)
print(self.Q.get_weights())
Explanation: Ok, on to the agent itself. I'll present the code in full here, then explain parts in more detail.
End of explanation
import os
import sys
from time import sleep
game = Game()
agent = Agent(game)
print('training...')
epochs = 6500
batch_size = 256
fname = 'game_weights.h5'
# keep track of past record_len results
record_len = 100
record = deque([], record_len)
for i in range(epochs):
game.reset()
reward = 0
loss = 0
# rewards only given at end of game
while reward == 0:
prev_state = game.state
action = agent.choose_action()
game.move(action)
reward = game.update()
new_state = game.state
# debug, "this should never happen"
assert not np.array_equal(new_state, prev_state)
agent.remember(prev_state, action, new_state, reward)
loss += agent.replay(batch_size)
# if running in a terminal, use these instead of print:
#sys.stdout.flush()
#sys.stdout.write('epoch: {:04d}/{} | loss: {:.3f} | win rate: {:.3f}\r'.format(i+1, epochs, loss, sum(record)/len(record) if record else 0))
if i % 100 == 0:
print('epoch: {:04d}/{} | loss: {:.3f} | win rate: {:.3f}\r'.format(i+1, epochs, loss, sum(record)/len(record) if record else 0))
record.append(reward if reward == 1 else 0)
agent.save(fname)
Explanation: You'll see that this is quite similar to the previous Q-learning agent we implemented. There are explore and discount values, for example. But the Q function is now a neural network.
The biggest differences are these remember and replay methods.
A challenge with DQNs is that they can be unstable - in particular, they exhibit a problem known as catastrophic forgetting in which later experiences overwrite earlier ones. When this happens, the agent is unable to take full advantage of everything it's learned, only what it's learned most recently.
A method to deal with this is called experience replay. We just store experienced states and their resulting rewards (as "memories"), then between actions we sample a batch of these memories (this is what the _prep_batch method does) and use them to train the neural network (i.e. "replay" these remembered experiences). This will become clearer in the code below, where we actually train the agent.
End of explanation
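As a tiny illustration of the memory mechanism described above (an addition for clarity, not part of the original agent):
from collections import deque
mem = deque([], maxlen=3)
for i in range(5):
    mem.append(i)
print(mem)  # deque([2, 3, 4], maxlen=3) -- only the most recent experiences are kept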
# play a round
game.reset()
#game.render() # rendering won't work inside a notebook, only from terminal. uncomment
reward = 0
while reward == 0:
action = agent.choose_action()
game.move(action)
reward = game.update()
#game.render()
sleep(0.1)
print('winner!' if reward == 1 else 'loser!')
Explanation: Here we train the agent for 6500 epochs (that is, 6500 games). We also keep a trailing record of its wins to see if its win rate is improving.
A game goes on until the reward is non-zero, which means the agent has either lost (reward of -1) or won (reward of +1). Note that between each action the agent "remembers" the states and reward it just saw, as well as the action it took. Then it "replays" past experiences to update its neural network.
Once the agent is trained, we can play a round and see if it wins.
Depicted below are the results from my training:
End of explanation |
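The training curve itself is not reproduced here. As a rough sketch (an addition, assuming the `record` deque from the training cell is still in scope), the trailing win rate could be plotted like this:
import matplotlib.pyplot as plt
wins = list(record)  # 1 = win, 0 = loss for the last `record_len` games
running = [sum(wins[:i + 1]) / (i + 1) for i in range(len(wins))]
plt.plot(running)
plt.xlabel('game (most recent window)')
plt.ylabel('running win rate')
plt.show()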
13,425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Introduction to graphs and tf.function
(Truncated links for viewing this notebook on TensorFlow.org / running it in Colab omitted.)
Step2: On the outside, a Function looks like a regular function you write using TensorFlow operations. Underneath, however, it is very different: a Function encapsulates several tf.Graphs behind one API. That is how a Function can give you the benefits of graph execution, such as speed and deployability.
tf.function applies to a function and all other functions it calls, as follows.
Step3: If you have used TensorFlow 1.x, you will notice that at no point did you need to define a Placeholder or a tf.Session.
Converting Python functions to graphs
Any function you write with TensorFlow will contain a mixture of built-in TF operations and Python logic, such as if-then clauses, loops, break, return, and continue. While TensorFlow operations are easily captured by a tf.Graph, Python-specific logic needs an extra step to become part of the graph. tf.function uses a library called AutoGraph (tf.autograph) to convert Python code into graph-generating code.
Step4: Though it is unlikely that you will need to view graphs directly, you can inspect the outputs to check the exact results. These are not easy to read, so there is no need to look too carefully!
Step5: Most of the time, tf.function will work without special consideration, but there are some caveats; the tf.function guide and the complete AutoGraph reference can help here.
Polymorphism
Step6: If the Function has already been called with that signature, it does not create a new tf.Graph.
Step7: Because it is backed by multiple graphs, a Function is polymorphic. That lets it support more input types than a single tf.Graph could represent, and lets each tf.Graph be optimized for better performance.
Step8: Using tf.function
So far, you have seen that you can convert a Python function into a graph simply by using tf.function as a decorator or wrapper. In practice, however, getting tf.function to work correctly can be tricky! The following sections explain how to make your code work as expected with tf.function.
Graph execution vs. eager execution
The code in a Function can be executed both eagerly and as a graph. By default, a Function executes its code as a graph.
Step9: To verify that your Function's graph is doing the same computation as its equivalent Python function, you can make it execute eagerly with tf.config.run_functions_eagerly(True). This is a switch that <strong data-md-type="double_emphasis">turns off the Function's ability to create and run graphs</strong>, so that it executes the code normally instead.
Step10: However, a Function can behave differently under graph and eager execution. The Python print function is one example. Let's see what happens when you insert a print statement into your function and call it repeatedly.
Step11: Observe what is printed.
Step12: Is the output surprising? get_MSE was called three times, but it only printed once.
To explain, the print statement is executed when the Function runs the original code in order to create the graph, in a process known as "tracing". <strong data-md-type="double_emphasis">Tracing captures the TensorFlow operations into a graph, and print is not captured in the graph.</strong> That graph is then executed for all three calls without ever running the Python code again.
As a sanity check, let's turn off graph execution and compare.
Step13: print is a Python side effect. There are other differences, and you should be aware of them when converting a function into a Function.
Note
Step14: tf.function is commonly used to speed up training loops; see this example with Keras.
注意 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import timeit
from datetime import datetime
# Define a Python function
def function_to_get_faster(x, y, b):
x = tf.matmul(x, y)
x = x + b
return x
# Create a `Function` object that contains a graph
a_function_that_uses_a_graph = tf.function(function_to_get_faster)
# Make some tensors
x1 = tf.constant([[1.0, 2.0]])
y1 = tf.constant([[2.0], [3.0]])
b1 = tf.constant(4.0)
# It just works!
a_function_that_uses_a_graph(x1, y1, b1).numpy()
Explanation: Introduction to graphs and tf.function
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/guide/intro_to_graphs" class=""><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/intro_to_graphs.ipynb" class=""><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/intro_to_graphs.ipynb" class=""><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/intro_to_graphs.ipynb" class=""><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
</table>
Overview
This guide goes beneath the surface of TensorFlow and Keras to explain how TensorFlow works. If you instead want to get started with Keras right away, see the collection of Keras guides.
In this guide, you will learn the core mechanics of TensorFlow: how a simple code change lets you retrieve graphs, how graphs are stored and represented, and how you can use them to accelerate and export your models.
Note: If you are only familiar with TensorFlow 1.x, this guide demonstrates a very different view of graphs.
This is an overview of how tf.function lets you switch from eager execution to graph execution. For a more complete specification of tf.function, see the <a href="function" data-md-type="link">tf.function guide</a>.
What are graphs?
In the previous three guides, TensorFlow was run eagerly. This means TensorFlow operations are executed by Python, operation by operation, with the results returned to Python.
While eager execution has several unique advantages, graph execution enables portability outside Python and tends to offer better performance. Graph execution means that tensor computations are executed as a TensorFlow graph (a tf.Graph, or simply a "graph").
Graphs are data structures that contain a set of tf.Operation objects, which represent units of computation, and tf.Tensor objects, which represent the units of data that flow between operations. They are defined in a tf.Graph context. Since these graphs are data structures, they can be saved, run, and restored all without the original Python code.
<img alt="A simple TensorFlow graph" src="./images/intro_to_graphs/two-layer-network.png">
The benefits of graphs
With a graph, you have a great deal of flexibility. You can use your TensorFlow graph in environments that do not have a Python interpreter, such as mobile applications, embedded devices, and backend servers. TensorFlow uses graphs as the format for saved models when it exports them from Python.
Graphs are also easily optimized, allowing the compiler to do transformations like:
Statically infer the value of tensors by folding constant nodes in your computation ("constant folding").
Separate sub-parts of a computation that are independent and split them between threads or devices.
Simplify arithmetic operations by eliminating common subexpressions.
There is an entire optimization system, Grappler, to perform this and other speedups.
In short, graphs are extremely useful and let your TensorFlow run fast, run in parallel, and run efficiently on multiple devices.
However, you still want to define your machine learning models (or other computations) in Python for convenience, and then automatically construct graphs when you need them.
Taking advantage of graphs
You create and run a graph in TensorFlow by using tf.function, either as a direct call or as a decorator. tf.function takes a regular function as input and returns a Function. <strong data-md-type="double_emphasis">A Function is a Python callable that builds TensorFlow graphs from the Python function. You use a Function in the same way as its Python equivalent.</strong>
End of explanation
def inner_function(x, y, b):
x = tf.matmul(x, y)
x = x + b
return x
# Use the decorator
@tf.function
def outer_function(x):
y = tf.constant([[2.0], [3.0]])
b = tf.constant(4.0)
return inner_function(x, y, b)
# Note that the callable will create a graph that
# includes inner_function() as well as outer_function()
outer_function(tf.constant([[1.0, 2.0]])).numpy()
Explanation: On the outside, a Function looks like a regular function you write using TensorFlow operations. Underneath, however, it is very different: a Function encapsulates several tf.Graphs behind one API. That is how a Function can give you the benefits of graph execution, such as speed and deployability.
tf.function applies to a function and all other functions it calls:
End of explanation
def my_function(x):
if tf.reduce_sum(x) <= 1:
return x * x
else:
return x-1
a_function = tf.function(my_function)
print("First branch, with graph:", a_function(tf.constant(1.0)).numpy())
print("Second branch, with graph:", a_function(tf.constant([5.0, 5.0])).numpy())
Explanation: If you have used TensorFlow 1.x, you will notice that at no point did you need to define a Placeholder or a tf.Session.
Converting Python functions to graphs
Any function you write with TensorFlow will contain a mixture of built-in TF operations and Python logic, such as if-then clauses, loops, break, return, and continue. While TensorFlow operations are easily captured by a tf.Graph, Python-specific logic needs an extra step to become part of the graph. tf.function uses a library called AutoGraph (tf.autograph) to convert Python code into graph-generating code.
End of explanation
# This is the graph-generating output of AutoGraph.
# Don't read the output too carefully.
print(tf.autograph.to_code(my_function))
Explanation: Though it is unlikely that you will need to view graphs directly, you can inspect the outputs to check the exact results. These are not easy to read, so there is no need to look too carefully!
End of explanation
@tf.function
def my_relu(x):
return tf.maximum(0., x)
# `my_relu` creates new graphs as it sees more signatures.
print(my_relu(tf.constant(5.5)))
print(my_relu([1, -1]))
print(my_relu(tf.constant([3., -3.])))
Explanation: Most of the time, tf.function will work without special consideration, but there are some caveats; the tf.function guide and the complete AutoGraph reference can help here.
Polymorphism: one Function, many graphs
A tf.Graph is specialized to a specific type of input (for example, tensors with a specific dtype, or objects with the same id()).
Each time you invoke a Function with new dtypes and shapes in its arguments, the Function creates a new tf.Graph for the new arguments. The dtypes and shapes of a tf.Graph's inputs are known as an input signature, or just a signature.
The Function stores the tf.Graph corresponding to that signature in a ConcreteFunction. <strong data-md-type="double_emphasis">A ConcreteFunction is a wrapper around a tf.Graph.</strong>
End of explanation
# These two calls do *not* create new graphs.
print(my_relu(tf.constant(-2.5))) # Signature matches `tf.constant(5.5)`.
print(my_relu(tf.constant([-1., 1.]))) # Signature matches `tf.constant([3., -3.])`.
Explanation: If the Function has already been called with that signature, it does not create a new tf.Graph.
End of explanation
# There are three `ConcreteFunction`s (one for each graph) in `my_relu`.
# The `ConcreteFunction` also knows the return type and shape!
print(my_relu.pretty_printed_concrete_signatures())
Explanation: Because it is backed by multiple graphs, a Function is polymorphic. That lets it support more input types than a single tf.Graph could represent, and lets each tf.Graph be optimized for better performance.
End of explanation
@tf.function
def get_MSE(y_true, y_pred):
sq_diff = tf.pow(y_true - y_pred, 2)
return tf.reduce_mean(sq_diff)
y_true = tf.random.uniform([5], maxval=10, dtype=tf.int32)
y_pred = tf.random.uniform([5], maxval=10, dtype=tf.int32)
print(y_true)
print(y_pred)
get_MSE(y_true, y_pred)
Explanation: Using tf.function
So far, you have seen that you can convert a Python function into a graph simply by using tf.function as a decorator or wrapper. In practice, however, getting tf.function to work correctly can be tricky! The following sections explain how to make your code work as expected with tf.function.
Graph execution vs. eager execution
The code in a Function can be executed both eagerly and as a graph. By default, a Function executes its code as a graph.
End of explanation
tf.config.run_functions_eagerly(True)
get_MSE(y_true, y_pred)
# Don't forget to set it back when you are done.
tf.config.run_functions_eagerly(False)
Explanation: To verify that your Function's graph is doing the same computation as its equivalent Python function, you can make it execute eagerly with tf.config.run_functions_eagerly(True). This is a switch that <strong data-md-type="double_emphasis">turns off the Function's ability to create and run graphs</strong>, so that it executes the code normally instead.
End of explanation
@tf.function
def get_MSE(y_true, y_pred):
print("Calculating MSE!")
sq_diff = tf.pow(y_true - y_pred, 2)
return tf.reduce_mean(sq_diff)
Explanation: However, a Function can behave differently under graph and eager execution. The Python print function is one example. Let's see what happens when you insert a print statement into your function and call it repeatedly.
End of explanation
error = get_MSE(y_true, y_pred)
error = get_MSE(y_true, y_pred)
error = get_MSE(y_true, y_pred)
Explanation: Observe what is printed:
End of explanation
# Now, globally set everything to run eagerly to force eager execution.
tf.config.run_functions_eagerly(True)
print("Run all functions eagerly.")
# Observe what is printed below.
error = get_MSE(y_true, y_pred)
error = get_MSE(y_true, y_pred)
error = get_MSE(y_true, y_pred)
tf.config.run_functions_eagerly(False)
Explanation: Is the output surprising? get_MSE was called three times, but it only printed once.
To explain, the print statement is executed when the Function runs the original code in order to create the graph, in a process known as "tracing". <strong data-md-type="double_emphasis">Tracing captures the TensorFlow operations into a graph, and print is not captured in the graph.</strong> That graph is then executed for all three calls without ever running the Python code again.
As a sanity check, let's turn off graph execution and compare:
End of explanation
x = tf.random.uniform(shape=[10, 10], minval=-1, maxval=2, dtype=tf.dtypes.int32)
def power(x, y):
result = tf.eye(10, dtype=tf.dtypes.int32)
for _ in range(y):
result = tf.matmul(x, result)
return result
print("Eager execution:", timeit.timeit(lambda: power(x, 100), number=1000))
power_as_graph = tf.function(power)
print("Graph execution:", timeit.timeit(lambda: power_as_graph(x, 100), number=1000))
Explanation: print is a Python side effect. There are other differences, and you should be aware of them when converting a function into a Function.
Note: If you would like to print values in both eager and graph execution, use tf.print instead.
tf.function best practices
It may take some time to get used to the behavior of Function. To get there faster, play around with decorating toy functions with @tf.function to get experience with going from eager to graph execution.
Designing for tf.function may be your best bet for writing graph-compatible TensorFlow programs. Here are some tips:
Toggle between eager and graph execution early and often with tf.config.run_functions_eagerly to pinpoint if and when the two modes diverge.
Create tf.Variables outside the Python function and modify them on the inside. The same goes for objects that use tf.Variable, like keras.layers, keras.Model, and tf.optimizers.
Avoid writing functions that depend on outer Python variables, excluding tf.Variables and Keras objects.
Prefer to write functions that take tensors and other TensorFlow types as input. You can pass in other object types, but be careful!
Include as much computation as possible under a tf.function to maximize the performance gain. For example, decorate a whole training step or the entire training loop.
Seeing the speed-up
tf.function usually improves the performance of your code, but the amount of speed-up depends on the kind of computation you run. Small computations can be dominated by the overhead of calling a graph. You can measure the difference in performance like so:
End of explanation
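The tf.print note above is easy to demonstrate; this short block is an illustrative addition, not part of the original guide:
@tf.function
def get_MSE_verbose(y_true, y_pred):
    tf.print("Calculating MSE!")  # printed on every call, even in graph execution
    sq_diff = tf.pow(y_true - y_pred, 2)
    return tf.reduce_mean(sq_diff)
error = get_MSE_verbose(y_true, y_pred)
error = get_MSE_verbose(y_true, y_pred)  # "Calculating MSE!" appears both times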
@tf.function
def a_function_with_python_side_effect(x):
print("Tracing!") # An eager-only side effect.
return x * x + tf.constant(2)
# This is traced the first time.
print(a_function_with_python_side_effect(tf.constant(2)))
# The second time through, you won't see the side effect.
print(a_function_with_python_side_effect(tf.constant(3)))
# This retraces each time the Python argument changes,
# as a Python argument could be an epoch count or other
# hyperparameter.
print(a_function_with_python_side_effect(2))
print(a_function_with_python_side_effect(3))
Explanation: tf.function is commonly used to speed up training loops; see this example with Keras.
Note: You can also try tf.function(jit_compile=True) for a more significant performance boost, especially if your code is heavy on TF control flow and uses many small tensors.
Performance and trade-offs
Graphs can speed up your code, but the process of creating them has some overhead. For some functions, creating the graph takes more time than executing it. That investment is usually paid back by the performance boost of subsequent executions, but be aware that the first few steps of any large model training can be slower because of tracing.
No matter how large your model, you want to avoid tracing frequently. The tf.function guide discusses how to set input specifications and use tensor arguments to avoid retracing. If you find you are getting unusually poor performance, it is a good idea to check whether you are retracing accidentally.
When is a Function tracing?
To figure out when your Function is tracing, add a print statement to its code: the Function will execute the print statement every time it traces.
End of explanation |
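As an illustrative addition to the retracing note above: pinning an input signature is one way to keep a Function from retracing when it is called with many different argument shapes or Python values.
@tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
def no_retrace_relu(x):
    print("Tracing!")  # runs only once, for the single allowed signature
    return tf.maximum(0., x)
no_retrace_relu(tf.constant(5.5))
no_retrace_relu(tf.constant([-1.0, 2.0]))  # no retrace: shape=None accepts any shape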
13,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Questions from the previous class
What will the regular expression "^\d+.\d{1,2}.\d{1,2}\s[^A-Z]?$" match?
What does the function filter(lambda s
Step1: Упражнение
Написать класс RangeCounter, который принимает начальное значение и шаг. У счетчика есть метод step(), который позволяет увеличить значение на размер шага. Давайте запретим менять значение напрямую, а шаг позволим устанавливать через сеттер.
Более подробно о with
Step3: Что такое потоки?
Системный планировщик отдает процессорное время потокам/процессам, переключая между ними контекст
Процессы/потоки работают "параллельно", в идеале используя несколько ядер процессора
Пути планировщика неисповедимы, нельзя заранее предсказать, какой процесс получит ресурсы в конкретный момент
Потоки надо синхронизировать согласно задачам, чтобы не было проблем с одновременным доступом
Пример - простая версия веб-сервера
Есть CPU-bound задачи и есть I/O-bound задачи - важно понимать разницу
Что такое GIL?
GIL - это глобальный мьютекс (механизм синхронизации) в интерпретаторе Python
GIL запрещает выполнять байткод Python больше чем одному потоку одновременно
Но это касается ТОЛЬКО байткода Python и не распространяется на I/O операции
Потоки Python (в отличие от потоков, скажем, в Ruby) - это полноценные потоки ОС
Step4: Упражнение
Реализовать "sleepsort". Предположим у нас есть короткий список чисел от 0 до 10. Чтобы их вывести в отсортированном порядке - достаточно каждый поток заставить "спать" количество секунд, равное самому числу, и только потом его выводить. В чем недостаток данного подхода?
Как обойти GIL?
Например, использовать процессы вместо потоков.
Тогда проблема будет с синхронизацией и обменом сообщениями (см. pickle)
И процессы все-таки немного тяжелее потоков. Стартовать по процессу на каждого клиента слишком дорого.
Step5: Ipyparallel
0MQ + Kernels
Поддержка платформ наподобие EC2
mpi4py
Task DAG
https
Step6: Примитивы синхронизации - мьютекс | Python Code:
a = 1
b = 3
a + b
a.__add__(b)
type(a)
isinstance(a, int)
class Animal(object):
mammal = True # class variable
def __init__(self, name, voice, color="black"):
self.name = name
        self.__voice = voice # "private" (name-mangled) attribute
        self._color = color # "sort of private" attribute, by convention only
def make_sound(self):
print('{0} {1} says "{2}"'.format(self._color, self.name, self.__voice))
@classmethod
def description(cls):
print("Some animal")
Animal.mammal
Animal.description()
a = Animal("dog", "bark")
a.mammal
a.__voice   # raises AttributeError: name mangling stores it as _Animal__voice
a._color
dir(a)
class Cat(Animal):
def __init__(self, color):
super().__init__(name="cat", voice="meow", color=color)
c = Cat(color="white")
isinstance(c, Animal)
c1 = Cat(color="white")
c2 = Cat(color="black")
print(c1.mammal)
c1.mammal = False
print(c1.mammal)
print(c2.mammal)
c1 = Cat(color="white")
c2 = Cat(color="black")
print(c1.mammal)
Cat.mammal = False
print(c1.mammal)
print(c2.mammal)
c._color = "green"
c.make_sound()
class Cat(Animal):
def __init__(self, color):
super().__init__(name="cat", voice="meow", color=color)
@property
def color(self):
return self._color
@color.setter
def color(self, val):
if val not in ("black", "white", "grey", "mixed"):
raise Exception("Cat can't be {0}!".format(val))
self._color = val
c = Cat("white")
c.color
c.color = "green"
c.color
Explanation: Questions from the previous class
What will the regular expression "^\d+.\d{1,2}.\d{1,2}\s[^A-Z]?$" match?
What does the function filter(lambda s: s.startswith("https://"), sys.stdin) do?
Explain in your own words what yield is.
In Python 2 it was possible to get a very strange error: ValueError: function 'func' accepts at least 2 arguments (2 given). In Python 3 the message was made more informative, but try to guess what you had to do to get such an error.
How do you pick a random element from a list?
Classes and magic methods
End of explanation
class A(object):
def __init__(self):
self.sandbox = {}
def __enter__(self):
return self.sandbox
def __exit__(self, exc_type, exc_value, traceback):
self.sandbox = {}
a = A()
with a as sbox:
sbox["foo"] = "bar"
print(sbox)
print(a.sandbox)
from contextlib import contextmanager
@contextmanager
def contextgen():
print("enter")
yield 1
print("exit")
with contextgen() as a:
print(a)
Explanation: Exercise
Write a RangeCounter class that takes an initial value and a step. The counter has a step() method that increases the value by the step size. Let's forbid changing the value directly, but allow setting the step through a setter. (One possible sketch is shown below.)
More details on with
End of explanation
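One possible sketch of the RangeCounter exercise above (an illustrative addition; the increment method is named step_once so it does not clash with the step property):
class RangeCounter:
    def __init__(self, start, step=1):
        self._value = start
        self._step = step
    def step_once(self):
        self._value += self._step
    @property
    def value(self):  # read-only: no setter, so the value cannot be changed directly
        return self._value
    @property
    def step(self):
        return self._step
    @step.setter
    def step(self, val):
        self._step = val
counter = RangeCounter(10, step=5)
counter.step_once()
print(counter.value)  # 15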
import os
import requests
from threading import Thread
class DownloadThread(Thread):
def __init__(self, url, name):
super().__init__()
self.url = url
self.name = name
def run(self):
res = requests.get(self.url, stream=True)
res.raise_for_status()
fname = os.path.basename(self.url)
with open(fname, "wb") as savefile:
for chunk in res.iter_content(1024):
savefile.write(chunk)
print(f"{self.name} закончил загрузку {self.url} !")
def main(urls):
for item, url in enumerate(urls):
thread = DownloadThread(url, f"Поток {item + 1}")
thread.start()
main([
"http://www.irs.gov/pub/irs-pdf/f1040.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040a.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040ez.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040es.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040sb.pdf"
])
# In this case the interpreter waits for all child threads to finish.
# Other languages may behave differently!
import queue
class DownloadThread2(Thread):
def __init__(self, queue, name):
super().__init__()
self.queue = queue
self.name = name
def run(self):
while True:
url = self.queue.get()
fname = os.path.basename(url)
res = requests.get(url, stream=True)
res.raise_for_status()
with open(fname, "wb") as savefile:
for chunk in res.iter_content(1024):
savefile.write(chunk)
self.queue.task_done()
print(f"{self.name} закончил загрузку {url} !")
def main(urls):
q = queue.Queue()
threads = [DownloadThread2(q, f"Поток {i + 1}") for i in range(2)]
for t in threads:
    # tell the interpreter NOT to wait for this child thread to finish
    t.daemon = True  # setDaemon() is deprecated in favour of the daemon attribute
t.start()
for url in urls:
q.put(url)
q.join() # everything has been processed - we are done
main([
"http://www.irs.gov/pub/irs-pdf/f1040.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040a.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040ez.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040es.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040sb.pdf"
])
Explanation: What are threads?
The system scheduler hands CPU time to threads/processes, switching context between them
Processes/threads run "in parallel", ideally using several processor cores
The scheduler's ways are inscrutable: you cannot predict in advance which process gets resources at a given moment
Threads have to be synchronized according to the task, so that concurrent access does not cause problems
Example - a simple version of a web server
There are CPU-bound tasks and there are I/O-bound tasks - it is important to understand the difference
What is the GIL?
The GIL is a global mutex (a synchronization mechanism) inside the Python interpreter
The GIL forbids more than one thread from executing Python bytecode at the same time
But this applies ONLY to Python bytecode and does not extend to I/O operations
Python threads (unlike threads in, say, Ruby) are full-fledged OS threads
End of explanation
from multiprocessing import Process
from multiprocessing import Queue
Explanation: Exercise
Implement "sleepsort". Suppose we have a short list of numbers from 0 to 10. To print them in sorted order, it is enough to make each thread "sleep" for a number of seconds equal to the number itself, and only then print it. What is the drawback of this approach?
How do you get around the GIL?
For example, use processes instead of threads (a minimal sketch follows below).
Then the problem becomes synchronization and message passing (see pickle)
And processes are still a bit heavier than threads. Starting a process per client is too expensive.
End of explanation
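A minimal sketch of the process-based approach just described (an illustrative addition): CPU-bound work runs in separate processes, and results come back through a multiprocessing.Queue.
def worker(n, out_queue):
    out_queue.put(sum(i * i for i in range(n)))  # CPU-bound work, not limited by the GIL
if __name__ == '__main__':
    q = Queue()
    procs = [Process(target=worker, args=(10**6, q)) for _ in range(4)]
    for p in procs:
        p.start()
    results = [q.get() for _ in procs]  # drain the queue before joining to avoid blocking
    for p in procs:
        p.join()
    print(results)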
import time
from concurrent.futures import ThreadPoolExecutor
# works the same way with ProcessPoolExecutor
def hold_my_beer_5_sec(beer):
time.sleep(5)
return beer
pool = ThreadPoolExecutor(3)
future = pool.submit(hold_my_beer_5_sec, ("Балтика"))
print(future.done())
time.sleep(5)
print(future.done())
print(future.result())
import concurrent.futures
import requests
def load_url(url):
fname = os.path.basename(url)
res = requests.get(url, stream=True)
res.raise_for_status()
with open(fname, "wb") as savefile:
for chunk in res.iter_content(1024):
savefile.write(chunk)
return fname
URLS = [
"http://www.irs.gov/pub/irs-pdf/f1040.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040a.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040ez.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040es.pdf",
"http://www.irs.gov/pub/irs-pdf/f1040sb.pdf"
]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
future_to_url = {
executor.submit(load_url, url): url
for url in URLS
}
for future in concurrent.futures.as_completed(future_to_url):
url = future_to_url[future]
print(f"URL '{future_to_url[future]}' is saved to '{future.result()}'")
Explanation: Ipyparallel
0MQ + Kernels
Support for platforms like EC2
mpi4py
Task DAG
https://ipyparallel.readthedocs.io/en/latest/
End of explanation
import threading  # only Thread was imported above, so pull in the module for Lock
m = threading.Lock()
m.acquire()
m.release()
Explanation: Synchronization primitives - the mutex
End of explanation |
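In practice a Lock is used as a context manager around shared state; this counter example is an illustrative addition:
counter = 0
lock = threading.Lock()
def add_many(n):
    global counter
    for _ in range(n):
        with lock:  # acquire/release happen automatically, even on exceptions
            counter += 1
workers = [Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(counter)  # 400000 -- no lost updates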
13,427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Acme
Step1: Installation
Install Acme
Step2: Install the environment library
Step3: Install visualization packages
Step4: Import Modules
Step5: Load an environment
We can now load an environment. In what follows we'll create an environment and grab the environment's specifications.
Step6: ## Create a D4PG agent
Step7: Run a training loop
Step9: Visualize an evaluation loop
Helper functions for rendering and visualization
Step10: Run and visualize the agent in the environment for an episode | Python Code:
environment_library = 'gym' # @param ['dm_control', 'gym']
Explanation: Acme: Quickstart
Guide to installing Acme and training your first D4PG agent.
<a href="https://colab.research.google.com/github/deepmind/acme/blob/master/examples/quickstart.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Select your environment library
End of explanation
!pip install dm-acme
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
Explanation: Installation
Install Acme
End of explanation
if environment_library == 'dm_control':
import distutils.util
import subprocess
if subprocess.run('nvidia-smi').returncode:
raise RuntimeError(
'Cannot communicate with GPU. '
'Make sure you are using a GPU Colab runtime. '
'Go to the Runtime menu and select Choose runtime type.')
mujoco_dir = "$HOME/.mujoco"
print('Installing OpenGL dependencies...')
!apt-get update -qq
!apt-get install -qq -y --no-install-recommends libglew2.0 > /dev/null
print('Downloading MuJoCo...')
BASE_URL = 'https://github.com/deepmind/mujoco/releases/download'
MUJOCO_VERSION = '2.1.1'
MUJOCO_ARCHIVE = (
f'mujoco-{MUJOCO_VERSION}-{distutils.util.get_platform()}.tar.gz')
!wget -q "{BASE_URL}/{MUJOCO_VERSION}/{MUJOCO_ARCHIVE}"
!wget -q "{BASE_URL}/{MUJOCO_VERSION}/{MUJOCO_ARCHIVE}.sha256"
check_result = !shasum -c "{MUJOCO_ARCHIVE}.sha256"
if _exit_code:
raise RuntimeError(
'Downloaded MuJoCo archive is corrupted (checksum mismatch)')
print('Unpacking MuJoCo...')
MUJOCO_DIR = '$HOME/.mujoco'
!mkdir -p "{MUJOCO_DIR}"
!tar -zxf {MUJOCO_ARCHIVE} -C "{MUJOCO_DIR}"
# Configure dm_control to use the EGL rendering backend (requires GPU)
%env MUJOCO_GL=egl
print('Installing dm_control...')
# Version 0.0.416848645 is the first one to support MuJoCo 2.1.1.
!pip install -q dm_control>=0.0.416848645
print('Checking that the dm_control installation succeeded...')
try:
from dm_control import suite
env = suite.load('cartpole', 'swingup')
pixels = env.physics.render()
except Exception as e:
raise e from RuntimeError(
'Something went wrong during installation. Check the shell output above '
'for more information.\n'
'If using a hosted Colab runtime, make sure you enable GPU acceleration '
'by going to the Runtime menu and selecting "Choose runtime type".')
else:
del suite, env, pixels
!echo Installed dm_control $(pip show dm_control | grep -Po "(?<=Version: ).+")
elif environment_library == 'gym':
!pip install gym
Explanation: Install the environment library
End of explanation
!sudo apt-get install -y xvfb ffmpeg
!pip install imageio
!pip install PILLOW
!pip install pyvirtualdisplay
Explanation: Install visualization packages
End of explanation
import IPython
from acme import environment_loop
from acme import specs
from acme import wrappers
from acme.agents.tf import d4pg
from acme.tf import networks
from acme.tf import utils as tf2_utils
from acme.utils import loggers
import numpy as np
import sonnet as snt
# Import the selected environment lib
if environment_library == 'dm_control':
from dm_control import suite
elif environment_library == 'gym':
import gym
# Imports required for visualization
import pyvirtualdisplay
import imageio
import base64
# Set up a virtual display for rendering.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
Explanation: Import Modules
End of explanation
if environment_library == 'dm_control':
environment = suite.load('cartpole', 'balance')
elif environment_library == 'gym':
environment = gym.make('MountainCarContinuous-v0')
environment = wrappers.GymWrapper(environment) # To dm_env interface.
else:
raise ValueError(
"Unknown environment library: {};".format(environment_library) +
"choose among ['dm_control', 'gym'].")
# Make sure the environment outputs single-precision floats.
environment = wrappers.SinglePrecisionWrapper(environment)
# Grab the spec of the environment.
environment_spec = specs.make_environment_spec(environment)
Explanation: Load an environment
We can now load an environment. In what follows we'll create an environment and grab the environment's specifications.
End of explanation
#@title Build agent networks
# Get total number of action dimensions from action spec.
num_dimensions = np.prod(environment_spec.actions.shape, dtype=int)
# Create the shared observation network; here simply a state-less operation.
observation_network = tf2_utils.batch_concat
# Create the deterministic policy network.
policy_network = snt.Sequential([
networks.LayerNormMLP((256, 256, 256), activate_final=True),
networks.NearZeroInitializedLinear(num_dimensions),
networks.TanhToSpec(environment_spec.actions),
])
# Create the distributional critic network.
critic_network = snt.Sequential([
# The multiplexer concatenates the observations/actions.
networks.CriticMultiplexer(),
networks.LayerNormMLP((512, 512, 256), activate_final=True),
networks.DiscreteValuedHead(vmin=-150., vmax=150., num_atoms=51),
])
# Create a logger for the agent and environment loop.
agent_logger = loggers.TerminalLogger(label='agent', time_delta=10.)
env_loop_logger = loggers.TerminalLogger(label='env_loop', time_delta=10.)
# Create the D4PG agent.
agent = d4pg.D4PG(
environment_spec=environment_spec,
policy_network=policy_network,
critic_network=critic_network,
observation_network=observation_network,
sigma=1.0,
logger=agent_logger,
checkpoint=False
)
# Create an loop connecting this agent to the environment created above.
env_loop = environment_loop.EnvironmentLoop(
environment, agent, logger=env_loop_logger)
Explanation: ## Create a D4PG agent
End of explanation
# Run a `num_episodes` training episodes.
# Rerun this cell until the agent has learned the given task.
env_loop.run(num_episodes=100)
Explanation: Run a training loop
End of explanation
# Create a simple helper function to render a frame from the current state of
# the environment.
if environment_library == 'dm_control':
def render(env):
return env.physics.render(camera_id=0)
elif environment_library == 'gym':
def render(env):
return env.environment.render(mode='rgb_array')
else:
raise ValueError(
"Unknown environment library: {};".format(environment_library) +
"choose among ['dm_control', 'gym'].")
def display_video(frames, filename='temp.mp4'):
  """Save and display video."""
# Write video
with imageio.get_writer(filename, fps=60) as video:
for frame in frames:
video.append_data(frame)
# Read video and display the video
video = open(filename, 'rb').read()
b64_video = base64.b64encode(video)
video_tag = ('<video width="320" height="240" controls alt="test" '
'src="data:video/mp4;base64,{0}">').format(b64_video.decode())
return IPython.display.HTML(video_tag)
Explanation: Visualize an evaluation loop
Helper functions for rendering and visualization
End of explanation
timestep = environment.reset()
frames = [render(environment)]
while not timestep.last():
# Simple environment loop.
action = agent.select_action(timestep.observation)
timestep = environment.step(action)
# Render the scene and add it to the frame stack.
frames.append(render(environment))
# Save and display a video of the behaviour.
display_video(np.array(frames))
Explanation: Run and visualize the agent in the environment for an episode
End of explanation |
13,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step2: In this equation
Step3: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step4: Use interact with plot_fermidist to explore the distribution | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from ipywidgets import interact, interactive, fixed  # IPython.html.widgets was removed; ipywidgets is the current package
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
F = 1/(np.exp((energy - mu)/kT)+1)
return F
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/kT}+1}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
def plot_fermidist(mu, kT):
fig, ax = plt.subplots(figsize=(15,4))
plt.plot(np.linspace(0,10.0,100),fermidist(np.linspace(0,10.0,100), mu, kT))
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.xlim(0,10);
plt.ylim(0,1.2);
plt.title('Fermidistribution as a Function of Energy');
plt.xlabel('Energy Levels');
plt.ylabel('Fermidistribution');
plt.tick_params(axis='y', direction='inout', length=5)
plt.tick_params(axis='x', direction='inout', length=8)
plot_fermidist(4.0, 1.0);
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interact(plot_fermidist,mu=(0.0,5.0),kT=(0.1,10.0));
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
For kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation |
13,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Build a digit classifier app with TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download and explore the MNIST dataset
The MNIST database contains 60,000 training images and 10,000 testing images of handwritten digits. We will use the dataset to demonstrate how to train an image classification model and convert it to TensorFlow Lite format.
Each image in the MNIST dataset is a 28x28 grayscale image containing a digit.
Step3: Train a TensorFlow model to classify digit images
We use the Keras API to build a TensorFlow model that can classify the digit images. Please see this tutorial if you are interested to learn more about how to build a machine learning model with Keras and TensorFlow.
Step4: Evaluate our model
We run our digit classification model against our test dataset that the model hasn't seen during its training process. We want to confirm that the model didn't just remember the digits it saw but also generalize well to new images.
Step5: Convert the Keras model to TensorFlow Lite
Step6: Verify the TensorFlow Lite model | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import math
print(tf.__version__)
# Helper function to display digit images
def show_sample(images, labels, sample_count=25):
  # Create a square grid that can fit {sample_count} images
grid_count = math.ceil(math.ceil(math.sqrt(sample_count)))
grid_count = min(grid_count, len(images), len(labels))
plt.figure(figsize=(2*grid_count, 2*grid_count))
for i in range(sample_count):
plt.subplot(grid_count, grid_count, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(images[i], cmap=plt.cm.gray)
plt.xlabel(labels[i])
plt.show()
Explanation: Build a digit classifier app with TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/lite/examples/digit_classifier/ml/mnist_tflite.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/lite/examples/digit_classifier/ml/mnist_tflite.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
This notebook shows an end-to-end example of training a TensorFlow model using Keras and Python, then export it to TensorFlow Lite format to use in mobile apps. Here we will train a handwritten digit classifier using MNIST dataset.
Setup
End of explanation
# Download MNIST dataset.
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# If you can't download the MNIST dataset from Keras, please try again with an alternative method below
# path = keras.utils.get_file('mnist.npz',
# origin='https://s3.amazonaws.com/img-datasets/mnist.npz',
# file_hash='8a61469f7ea1b51cbae51d4f78837e45')
# with np.load(path, allow_pickle=True) as f:
# train_images, train_labels = f['x_train'], f['y_train']
# test_images, test_labels = f['x_test'], f['y_test']
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Show the first 25 images in the training dataset.
show_sample(train_images,
['Label: %s' % label for label in train_labels])
Explanation: Download and explore the MNIST dataset
The MNIST database contains 60,000 training images and 10,000 testing images of handwritten digits. We will use the dataset to demonstrate how to train an image classification model and convert it to TensorFlow Lite format.
Each image in the MNIST dataset is a 28x28 grayscale image containing a digit.
End of explanation
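A quick check (an illustrative addition) that the data matches the description above -- 60,000/10,000 images of 28x28 pixels scaled to [0, 1]:
print(train_images.shape, test_images.shape)   # (60000, 28, 28) (10000, 28, 28)
print(train_images.min(), train_images.max())  # 0.0 1.0 after normalization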
# Define the model architecture
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
# Optional: You can replace the dense layer above with the convolution layers below to get higher accuracy.
# keras.layers.Reshape(target_shape=(28, 28, 1)),
# keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation=tf.nn.relu),
# keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation=tf.nn.relu),
# keras.layers.MaxPooling2D(pool_size=(2, 2)),
# keras.layers.Dropout(0.25),
# keras.layers.Flatten(input_shape=(28, 28)),
# keras.layers.Dense(128, activation=tf.nn.relu),
# keras.layers.Dropout(0.5),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# Train the digit classification model
model.fit(train_images, train_labels, epochs=5)
Explanation: Train a TensorFlow model to classify digit images
We use the Keras API to build a TensorFlow model that can classify the digit images. Please see this tutorial if you are interested to learn more about how to build a machine learning model with Keras and TensorFlow.
End of explanation
# Evaluate the model using test dataset.
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
# Predict the labels of digit images in our test dataset.
predictions = model.predict(test_images)
# Then plot the first 25 test images and their predicted labels.
show_sample(test_images,
['Predicted: %d' % np.argmax(result) for result in predictions])
Explanation: Evaluate our model
We run our digit classification model against our test dataset that the model hasn't seen during its training process. We want to confirm that the model didn't just remember the digits it saw but also generalize well to new images.
End of explanation
# Convert Keras model to TF Lite format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the TF Lite model as file
f = open('mnist.tflite', "wb")
f.write(tflite_model)
f.close()
# Download the digit classification model if you're using Colab,
# or print the model's local path if you're not using Colab.
try:
from google.colab import files
files.download('mnist.tflite')
except ImportError:
import os
print('TF Lite model:', os.path.join(os.getcwd(), 'mnist.tflite'))
Explanation: Convert the Keras model to TensorFlow Lite
End of explanation
# Download a test image
zero_img_path = keras.utils.get_file(
'zero.png',
'https://storage.googleapis.com/khanhlvg-public.appspot.com/digit-classifier/zero.png'
)
image = keras.preprocessing.image.load_img(
zero_img_path,
color_mode = 'grayscale',
target_size=(28, 28),
interpolation='bilinear'
)
# Pre-process the image: Adding batch dimension and normalize the pixel value to [0..1]
# In training, we feed images in a batch to the model to improve training speed, making the model input shape to be (BATCH_SIZE, 28, 28).
# For inference, we still need to match the input shape with training, so we expand the input dimensions to (1, 28, 28) using np.expand_dims
input_image = np.expand_dims(np.array(image, dtype=np.float32) / 255.0, 0)
# Show the pre-processed input image
show_sample(input_image, ['Input Image'], 1)
# Run inference with TensorFlow Lite
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
interpreter.set_tensor(interpreter.get_input_details()[0]["index"], input_image)
interpreter.invoke()
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])()[0]
# Print the model's classification result
digit = np.argmax(output)
print('Predicted Digit: %d\nConfidence: %f' % (digit, output[digit]))
Explanation: Verify the TensorFlow Lite model
End of explanation |
13,430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook used to develop code
output from classification is data frame with slave_logs (maybe rename that column?) indicating
Step1: Load and clean data
Load CLIWOC ship logs
Step2: Find definite slave data in CLIWOC data set
These logs will be used to test the classifier
Step3: Clean CLIWOC data
Step4: cliwoc_data (unclassified) = 0
cliwoc_data (no slaves) = 1
cliwoc_data (slaves) = 2
slave_data = 3
Step5: Load Slave Voyages data
Step6: Clean Slave voyages data
Step7: Join data sets
Step8: Test of fuzzywuzzy method
Step9: Encode data
Must encode data before separating, otherwise values that do not occur in a subset will be encoded differently
Step10: Extract training data, and create list of classes
Step11: left this code so we can check if there are any null values in each
dataframe
Step12: Fit training data to classifier
note! first column of numpy array is index! do not include in classification!
Step15: Test classifier
check if slave logs from cliwoc data were classified correctly (want mostly classified as 1)
compare first column with slave_index
Step16: try decision trees plotting
Following lines of code do not currently work, we need to install graphviz | Python Code:
classifier_algorithm = "Decision Tree"
import collections
import exploringShipLogbooks
import numpy as np
import os.path as op
import pandas as pd
import exploringShipLogbooks.wordcount as wc
from fuzzywuzzy import fuzz
from sklearn import preprocessing
from sklearn.naive_bayes import MultinomialNB
from sklearn import tree
from exploringShipLogbooks.basic_utils import clean_data
from exploringShipLogbooks.basic_utils import encode_data_df
from exploringShipLogbooks.basic_utils import extract_logbook_data
from exploringShipLogbooks.fuzz_replacement import fuzzy_wuzzy_classification
from exploringShipLogbooks.basic_utils import isolate_columns
from exploringShipLogbooks.basic_utils import isolate_training_data
from exploringShipLogbooks.config import *
Explanation: Notebook used to develop code
output from classification is data frame with slave_logs (maybe rename that column?) indicating:
cliwoc_data (unclassified) = 0
cliwoc_data (no slaves) = 1
cliwoc_data (slaves) = 2
slave_data = 3
classified as slave log = 4
classified as non slave log = 5
End of explanation
# extract data from zip file
cliwoc_data = extract_logbook_data('CLIWOC15.csv')
label_encoding = preprocessing.LabelEncoder().fit(cliwoc_data['LogbookIdent']).classes_
cliwoc_data['LogbookIdent'] = preprocessing.LabelEncoder().fit_transform(cliwoc_data['LogbookIdent'])
Explanation: Load and clean data
Load CLIWOC ship logs
End of explanation
# extract logs that mention slaves
slave_mask = wc.count_key_words(cliwoc_data, text_columns, slave_words)
print('Found ', len(slave_mask[slave_mask]), ' logs that mention slaves')
Explanation: Find definite slave data in CLIWOC data set
These logs will be used to test the classifier
End of explanation
# find indices of ship names that are "non-slave" ships before dropping ship name column
non_slave_log_locations = isolate_training_data(cliwoc_data, {'ShipName': non_slave_ships})
print('Found ', len(non_slave_log_locations[non_slave_log_locations==True]), ' logs that are non-slave ships')
cliwoc_data['slave_logs'] = np.zeros(len(cliwoc_data))
slave_log_locations = cliwoc_data['LogbookIdent'].isin(list(cliwoc_data['LogbookIdent']
[slave_mask].unique()))
Explanation: Clean CLIWOC data
End of explanation
cliwoc_data.loc[non_slave_log_locations,'slave_logs'] = 1
cliwoc_data.loc[slave_log_locations,'slave_logs'] = 2
cliwoc_data = cliwoc_data.sort_values('LogbookIdent', ascending=True)
cliwoc_data_all = cliwoc_data.set_index('LogbookIdent', drop= False).copy()
cliwoc_data = cliwoc_data.set_index('LogbookIdent', drop = False)
cliwoc_data = cliwoc_data.drop_duplicates('LogbookIdent')
# uncomment this if looking at ship names for manual review
#desired_columns.append('ShipName')
# remove undesired columns
cliwoc_data = isolate_columns(cliwoc_data, desired_columns)
Explanation: cliwoc_data (unclassified) = 0
cliwoc_data (no slaves) = 1
cliwoc_data (slaves) = 2
slave_data = 3
End of explanation
data_path = op.join(exploringShipLogbooks.__path__[0], 'data')
file_name = data_path + '/tastdb-exp-2010'
slave_voyage_logs = pd.read_pickle(file_name)
year_ind = ~(slave_voyage_logs['yeardep'].isnull())
slave_voyage_logs = slave_voyage_logs[year_ind]
cliwoc_ind = (slave_voyage_logs['yeardep']>cliwoc_data['Year'].min()) & (slave_voyage_logs['yeardep']<cliwoc_data['Year'].max())
slave_voyage_logs = slave_voyage_logs[cliwoc_ind]
Explanation: Load Slave Voyages data
End of explanation
slave_voyage_desired_cols = list(slave_voyage_conversions.keys())
slave_voyage_logs = isolate_columns(slave_voyage_logs, slave_voyage_desired_cols)
slave_voyage_logs.rename(columns=slave_voyage_conversions, inplace=True)
#slave_voyage_logs.columns = ['Nationality', 'ShipType', 'VoyageFrom', 'VoyageTo', 'Year']
slave_voyage_logs['slave_logs'] = 3
slave_voyage_indices = range(len(slave_voyage_logs)) + (cliwoc_data.tail(1).index[0]+1)
slave_voyage_logs = slave_voyage_logs.set_index(slave_voyage_indices)
Explanation: Clean Slave voyages data
End of explanation
all_data = pd.concat([cliwoc_data, slave_voyage_logs])
#all_data = cliwoc_data.append(slave_voyage_logs)
all_data = clean_data(all_data)
# cleanup
#del cliwoc_data, slave_voyage_logs
all_data.head()
Explanation: Join data sets
End of explanation
all_data_test = all_data.copy()
fuzz_columns = ['Nationality', 'ShipType', 'VoyageFrom', 'VoyageTo']
for col in fuzz_columns:
all_data = fuzzy_wuzzy_classification(all_data, col)
Explanation: Test of fuzzywuzzy method
End of explanation
from sklearn.preprocessing import LabelEncoder
class MultiColumnLabelEncoder:
def __init__(self,columns = None):
self.columns = columns # array of column names to encode
def fit(self,X,y=None):
return self # not relevant here
def transform(self,X):
'''
Transforms columns of X specified in self.columns using
LabelEncoder(). If no columns specified, transforms all
columns in X.
'''
output = X.copy()
if self.columns is not None:
for col in self.columns:
if is_instance(X[col][0], str):
output[col] = LabelEncoder().fit_transform(output[col])
else:
output[col] = X[col]
else:
for colname,col in output.iteritems():
output[colname] = LabelEncoder().fit_transform(col)
return output
def fit_transform(self,X,y=None):
return self.fit(X,y).transform(X)
if classifier_algorithm == "Decision Tree":
all_data = MultiColumnLabelEncoder().fit_transform(all_data)
elif classifier_algorithm == "Naive Bayes":
all_data = encode_data_df(all_data)
all_data['no_data'] = all_data['nan'].apply(lambda x: x.any(), axis=1).astype(int)
all_data = all_data.drop('nan', axis=1)
else:
raise KeyError("Please enter a valid classification type (Decision Trees or Naive Bayes)")
Explanation: Encode data
Must encode data before separating, otherwise values that do not occur in a subset will be encoded differently
End of explanation
unclassified_logs = all_data[all_data['slave_logs']==0]
#unclassified_logs = unclassified_logs.drop('slave_logs', axis=1)
validation_set_1 = all_data[all_data['slave_logs']==2]
#validation_set_1 = validation_set_1.drop('slave_logs', axis=1)
# reserve first 20% of slave_voyage_logs as validation set
validation_set_2_indices = range(slave_voyage_indices.min(),
slave_voyage_indices.min() + round(len(slave_voyage_indices)*.2))
validation_set_2 = all_data.iloc[validation_set_2_indices]
#validation_set_2 = validation_set_2.drop('slave_logs', axis=1)
training_logs_pos = all_data.drop(validation_set_2_indices)
training_logs_pos = training_logs_pos[training_logs_pos['slave_logs']==3]
#training_logs_pos = training_logs_pos.drop('slave_logs', axis=1)
# note! This relies on cliwoc data being first in all_data
# could make more robust later
training_logs_neg = all_data[all_data['slave_logs']==1]
#training_logs_neg = training_logs_neg.drop('slave_logs', axis=1)
# cleanup
#del all_data
Explanation: Extract training data, and create list of classes
End of explanation
def finding_null_values(df):
return df.isnull().sum()[df.isnull().sum()>0]
repeat_multiplier = round(len(training_logs_pos)/len(training_logs_neg))
# create list of classes for training data (0 is for non-slave, 1 is for slave)
# index matches training_data
classes = np.zeros(len(training_logs_neg)).repeat(repeat_multiplier)
#classes = np.append(classes, np.ones(len(training_logs_pos)))
classes = np.append(classes, np.ones(len(training_logs_pos)))
# join training data
neg_rep = pd.concat([training_logs_neg]*repeat_multiplier)
training_data = pd.concat([neg_rep, training_logs_pos], ignore_index = True)
# convert to numpy array
columns = list(training_data.columns)
columns.remove('slave_logs')
training_data = training_data.as_matrix(columns)
Explanation: left this code so we can check if there are any null values in each
dataframe
End of explanation
if classifier_algorithm == "Decision Tree":
classifier = MultinomialNB(alpha = 1.0, class_prior = None, fit_prior = True)
classifier.fit(training_data[::,1::], classes)
elif classifier_algorithm == "Naive Bayes":
classifier = tree.DecisionTreeClassifier()
classifier.fit(training_data[::,1::], classes)
else:
raise KeyError("Please enter a valid classification type (Decision Trees or Naive Bayes)")
Explanation: Fit training data to classifier
note! first column of numpy array is index! do not include in classification!
End of explanation
def validation_test(classifier, validation_set, expected_class):
input classifer object, validation set (data frame), and expected class
of validation set (i.e. 1 or 0). Prints successful classification rate.
columns = list(validation_set.columns)
columns.remove('slave_logs')
validation_set = validation_set.as_matrix(columns)
predictions = classifier.predict(validation_set[::,1::])
counts = collections.Counter(predictions)
percent_correct = (counts[expected_class]/(len(predictions))* 100)
print('Validation set was classified as', expected_class,
round(percent_correct,2), '% of the time')
def predict_class(classifier, data_subset):
Predict class of data, and append predictions to data frame
try:
# drop old predictions before reclassifying (if they exist)
data_subset = data_subset.drop('predictions', axis = 1)
data_to_classify = data_subset.copy()
except:
data_to_classify = data_subset.copy()
pass
# convert to numpy and classify
columns = list(data_to_classify.columns)
columns.remove('slave_logs')
data_matrix = data_to_classify.as_matrix(columns)
predictions = classifier.predict(data_matrix[::,1::])
# revalue slave_log ID column to indicate classification
data_to_classify['slave_logs'] = predictions + 4
# print statstics
counts = collections.Counter(predictions)
for key in counts:
percent = (counts[key]/(len(predictions))* 100)
print(round(percent, 2), 'of data was classified as ', key)
# update slave_log columns
return data_to_classify
print('Testing validation data from slave logs data set')
validation_test(classifier, validation_set_2, 1)
print('Testing validation data from cliwoc data set:')
validation_test(classifier, validation_set_1, 1)
unclassified_logs = predict_class(classifier, unclassified_logs)
unclassified_logs.head()
Explanation: Test classifier
check if slave logs from cliwoc data were classified correctly (want mostly classified as 1)
compare first column with slave_index
End of explanation
# export PDF with decision tree
from sklearn.externals.six import StringIO
import os
import pydot
dot_data = StringIO()
tree.export_graphviz(new_classifier, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("test.pdf")
Explanation: try decision trees plotting
Following lines of code do not currently work, we need to install graphviz
End of explanation |
13,431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
For more info on Myria, access to the demo cluster, and setting up a cluster
Step1: MyriaL Basics
First you would scan in whatever tables you want to use in that cell.
These are the tables visible in the Myria-Web datasets tab.
R1 = scan(cosmo8_000970);
This put all of the data in the cosmo8_000970 table into the relation R1 which can now be queried with MyriaL
R2 = select * from R1 limit 5;
Once we have a relation, we can query it, and store the result in a new relation.
Sometimes you just want to see the output of the cell you're running, or sometimes you want to store the result for later use. In either case, you have to store the relation that you want to see or store the output of, because otherwise Myria will optimize the query into an empty query.
store(R2, MyInterestingResult);
This will add MyInterestingResult to the list of datasets on Myria-Web. If you are running multiple queries and want to just see their results without storing multiple new tables, you can pick a name and overwrite it repeatedly
Step2: User Defined Functions
In MyriaL we can define our own functions that will then be applied to the results of a query. These can either be written in Python and registered with Myria or they can be written directly within a MyriaL cell (but not in Python).
When registering a Python function as a UDF, we need to specify the type of the return value. The possible types are the INTERNAL_TYPES defined in raco.types <a href="https
Step3: There is also special syntax for user defined aggregate functions, which use all of the rows to produce a single output, like a Reduce or Fold function pattern
Step4: Working with multiple snapshots
On the Myria demo cluster we only provide cosmo8_000970, but on a private cluster we could load in any number of snapshots to look for how things change over time. | Python Code:
# myria-python functionality
from myria import *
# myriaL cell functionality
%load_ext myria
# connection for myria-python functionality
connection = MyriaConnection(rest_url='http://localhost:8753')
# same as: http://ec2-52-36-55-94.us-west-2.compute.amazonaws.com:8753
# connection for myriaL cell functionality
%connect http://localhost:8753
Explanation: For more info on Myria, access to the demo cluster, and setting up a cluster: http://myria.cs.washington.edu/
Connecting to Myria
End of explanation
%%query
-- comments in MyriaL look like this
-- notice that the notebook highlighting still thinks we are writing python: in, from, for, range, return
R1 = scan(cosmo8_000970);
R2 = select * from R1 limit 5;
R3 = select iOrder from R1 limit 5;
store(R2, garbage);
%%query
-- there are some built in functions that are useful, just like regular SQL:
cosmo8 = scan(cosmo8_000970);
countRows = select count(*) as countRows from cosmo8;
store(countRows, garbage);
%%query
-- lets say we want just the number of gas particles
cosmo8 = scan(cosmo8_000970);
c = select count(*) as numGas from cosmo8 where type = 'gas';
store(c, garbage);
%%query
-- some stats about the positions of star particles
cosmo8 = scan(cosmo8_000970);
positionStats = select min(x) as min_x
, max(x) as max_x
, avg(x) as avg_x
, stdev(x) as stdev_x
, min(y) as min_y
, max(y) as max_y
, avg(y) as avg_y
, stdev(y) as stdev_y
, min(z) as min_z
, max(z) as max_z
, avg(z) as avg_z
, stdev(z) as stdev_z
from cosmo8
where type = 'star';
store(positionStats, garbage);
# we can also create constants in Python and reference them in MyriaL
low = 50000
high = 100000
destination = 'tempRangeCosmo8'
%%query
-- we can reference Python constants with '@'
cosmo8 = scan(cosmo8_000970);
temps = select iOrder, mass, type, temp
from cosmo8
where temp > @low and temp < @high
limit 10;
store(temps, @destination);
Explanation: MyriaL Basics
First you would scan in whatever tables you want to use in that cell.
These are the tables visible in the Myria-Web datasets tab.
R1 = scan(cosmo8_000970);
This put all of the data in the cosmo8_000970 table into the relation R1 which can now be queried with MyriaL
R2 = select * from R1 limit 5;
Once we have a relation, we can query it, and store the result in a new relation.
Sometimes you just want to see the output of the cell you're running, or sometimes you want to store the result for later use. In either case, you have to store the relation that you want to see or store the output of, because otherwise Myria will optimize the query into an empty query.
store(R2, MyInterestingResult);
This will add MyInterestingResult to the list of datasets on Myria-Web. If you are running multiple queries and want to just see their results without storing multiple new tables, you can pick a name and overwrite it repeatedly:
%%query
...
store(R2, temp);
...
query%%
...
store(R50, temp);
All statements need to be ended with a semicolon!
Also, note that a MyriaL cell cannot contain any Python.
These cells are Python by default, but a MyriaL cell starts with %%query and can only contain MyriaL syntax.
End of explanation
from raco.types import DOUBLE_TYPE
from myria.udf import MyriaPythonFunction
# each row is passed in as a tupl within a list
def sillyUDF(tuplList):
row = tuplList[0]
x = row[0]
y = row[1]
z = row[2]
if (x > y):
return x + y + z
else:
return z
# A python function needs to be registered to be able to
# call it from a MyriaL cell
MyriaPythonFunction(sillyUDF, DOUBLE_TYPE).register()
# To see all functions currently registered
connection.get_functions()
%%query
-- for your queries to run faster, its better to push the UDF to the smallest possible set of data
cosmo8 = scan(cosmo8_000970);
small = select * from cosmo8 limit 10;
res = select sillyUDF(x,y,z) as sillyPyRes from small;
store(res, garbage);
%%query
-- same thing but as a MyriaL UDF
def silly(x,y,z):
case
when x > y
then x + y + z
else z
end;
cosmo8 = scan(cosmo8_000970);
res = select silly(x,y,z) as sillyMyRes from cosmo8 limit 10;
store(res, garbage);
from raco.types import DOUBLE_TYPE
def distance(tuplList):
# note that libraries used inside the UDF need to be imported inside the UDF
import math
row = tuplList[0]
x1 = row[0]
y1 = row[1]
z1 = row[2]
x2 = row[3]
y2 = row[4]
z2 = row[5]
return math.sqrt((x1-x2)**2 + (y1-y2)**2 + (z1-z2)**2)
MyriaPythonFunction(distance, DOUBLE_TYPE).register()
print distance([(.1, .1, .1, .2, .2, .2)])
eps = .0042
%%query
-- here I am trying to find all points within eps distance of a given point
-- in order to avoid the expensive UDF distance() call on every point in the data,
-- I first filter the points by a simpler range query that immitates a bounding box
cosmo8 = scan(cosmo8_000970);
point = select * from cosmo8 where iOrder = 68649;
cube = select c.* from cosmo8 as c, point as p
where abs(c.x - p.x) < @eps
and abs(c.y - p.y) < @eps
and abs(c.z - p.z) < @eps;
distances = select c.*, distance(c.x, c.y, c.z, p.x, p.y, p.z) as dist from cube as c, point as p;
res = select * from distances where dist < @eps;
store(res, garbage);
%%query
cosmo8 = scan(cosmo8_000970);
point = select * from cosmo8 where iOrder = 68649;
cube = select c.* from cosmo8 as c, point as p
where abs(c.x - p.x) < @eps
and abs(c.y - p.y) < @eps
and abs(c.z - p.z) < @eps;
onlyGases = select * from cube where type = 'gas';
distances = select c.*, distance(c.x, c.y, c.z, p.x, p.y, p.z) as dist from onlyGases as c, point as p;
res = select * from distances where dist < @eps;
store(res, garbage);
Explanation: User Defined Functions
In MyriaL we can define our own functions that will then be applied to the results of a query. These can either be written in Python and registered with Myria or they can be written directly within a MyriaL cell (but not in Python).
When registering a Python function as a UDF, we need to specify the type of the return value. The possible types are the INTERNAL_TYPES defined in raco.types <a href="https://github.com/uwescience/raco/blob/4b2387aaaa82daaeac6c8960c837a6ccc7d46ff8/raco/types.py">as seen here</a>
Currently a function signature can't be registered more than once. In order to overwrite an existing registered function of the same signature, you have to use the Restart Kernel button in the Jupyter Notebook toolbar.
End of explanation
%%query
-- UDA example using MyriaL functions inside the UDA update line
def pickBasedOnValue2(val1, arg1, val2, arg2):
case
when val1 >= val2
then arg1
else arg2
end;
def maxValue2(val1, val2):
case
when val1 >= val2
then val1
else val2
end;
uda argMaxAndMax(arg, val) {
[-1 as argAcc, -1.0 as valAcc];
[pickBasedOnValue2(val, arg, valAcc, argAcc),
maxValue2(val, valAcc)];
[argAcc, valAcc];
};
cosmo8 = scan(cosmo8_000970);
res = select argMaxAndMax(iOrder, vx) from cosmo8;
store(res, garbage);
# Previously when we wrote a UDF we expected the tuplList to only hold one row
# but UDFs that are used in a UDA could be given many rows at a time, so it is
# important to loop over all of them and keep track of the state/accumulator outside
# the loop, and then return the value that is expected by the update-expr line in the UDA.
from raco.types import LONG_TYPE
def pickBasedOnValue(tuplList):
maxArg = -1
maxVal = -1.0
for tupl in tuplList:
value1 = tupl[0]
arg1 = tupl[1]
value2 = tupl[2]
arg2 = tupl[3]
if (value1 >= value2):
if (value1 >= maxVal):
maxArg = arg1
maxVal = value1
else:
if (value2 >= maxVal):
maxArg = arg2
maxVal = value2
return maxArg
MyriaPythonFunction(pickBasedOnValue, LONG_TYPE).register()
from raco.types import DOUBLE_TYPE
def maxValue(tuplList):
maxVal = -1.0
for tupl in tuplList:
value1 = tupl[0]
value2 = tupl[1]
if (value1 >= value2):
if (value1 >= maxVal):
maxVal = value1
else:
if (value2 >= maxVal):
maxVal = value2
return maxVal
MyriaPythonFunction(maxValue, DOUBLE_TYPE).register()
%%query
-- UDA example using Python functions inside the UDA update line
uda argMaxAndMax(arg, val) {
[-1 as argAcc, -1.0 as valAcc];
[pickBasedOnValue(val, arg, valAcc, argAcc),
maxValue(val, valAcc)];
[argAcc, valAcc];
};
t = scan(cosmo8_000970);
s = select argMaxAndMax(iOrder, vx) from t;
store(s, garbage);
%%query
-- of course, argMaxAndMax can be done much more simply:
c = scan(cosmo8_000970);
m = select max(vx) as mvx from c;
res = select iOrder, mvx from m,c where vx = mvx;
store(res, garbage);
Explanation: There is also special syntax for user defined aggregate functions, which use all of the rows to produce a single output, like a Reduce or Fold function pattern:
uda func-name(args) {
initialization-expr(s);
update-expr(s);
result-expr(s);
};
Where each of the inner lines is a bracketed statement with an entry for each expression that you want to output.
End of explanation
%%query
c8_000970 = scan(cosmo8_000970);
c8_000962 = scan(cosmo8_000962);
-- finding all gas particles that were destroyed between step 000962 and 000970
c1Gases = select iOrder from c8_000962 where type = 'gas';
c2Gases = select iOrder from c8_000970 where type = 'gas';
exist = select c1.iOrder from c1Gases as c1, c2Gases as c2 where c1.iOrder = c2.iOrder;
destroyed = diff(c1Gases, exist);
store(destroyed, garbage);
%%query
c8_000970 = scan(cosmo8_000970);
c8_000962 = scan(cosmo8_000962);
-- finding all particles where some property changed between step 000962 and 000970
res = select c1.iOrder
from c8_000962 as c1, c8_000970 as c2
where c1.iOrder = c2.iOrder
and c1.metals = 0.0 and c2.metals > 0.0;
store(res, garbage);
from IPython.display import HTML
HTML('''<script>
code_show_err=false;
function code_toggle_err() {
if (code_show_err){
$('div.output_stderr').hide();
} else {
$('div.output_stderr').show();
}
code_show_err = !code_show_err
}
$( document ).ready(code_toggle_err);
</script>
To toggle on/off output_stderr, click <a href="javascript:code_toggle_err()">here</a>.''')
Explanation: Working with multiple snapshots
On the Myria demo cluster we only provide cosmo8_000970, but on a private cluster we could load in any number of snapshots to look for how things change over time.
End of explanation |
13,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
Quick demo on how FloPy handles external files for arrays
Step1: make an hk and vka array. We'll save hk to files - pretent that you spent months making this important model property. Then make an lpf
Step2: Let's also have some recharge with mixed args as well. Pretend the recharge in the second stress period is very important and precise
Step3: Let's look at the files that were created
Step4: We see that a copy of the hk files as well as the important recharge file were made in the model_ws.Let's looks at the lpf file
Step5: We see that the open/close approach was used - this is because ml.array_free_format is True. Notice that vka is written internally
Step6: Now change model_ws
Step7: Now when we call write_input(), a copy of external files are made in the current model_ws
Step8: Now we see that the external files were copied to the new model_ws
Using external_path
It is sometimes useful when first building a model to write the model arrays as external files for processing and parameter estimation. The model attribute external_path triggers this behavior
Step9: We can see that the model constructor created both model_ws and external_path which is relative to the model_ws
Step10: Now, when we call write_input(), any array properties that were specified as np.ndarray will be written externally. If a scalar was passed as the argument, the value remains internal to the model input files
Step11: Now, vka was also written externally, but not the storage properties.Let's verify the contents of the external path directory. We see our hard-fought hk and important_recharge arrays, as well as the vka arrays.
Step12: Fixed format
All of this behavior also works for fixed-format type models (really, really old models - I mean OLD!)
Step13: We see that now the external arrays are being handled through the name file. Let's look at the name file
Step14: "free" and "binary" format
Step15: The .how attribute
Util2d includes a .how attribute that gives finer grained control of how arrays will written
Step16: This will raise an error since our model does not support free format...
Step17: So let's reset hk layer 1 back to external... | Python Code:
import os
import sys
import shutil
import numpy as np
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('flopy version: {}'.format(flopy.__version__))
# make a model
nlay,nrow,ncol = 10,20,5
model_ws = os.path.join("data","external_demo")
if os.path.exists(model_ws):
shutil.rmtree(model_ws)
# the place for all of your hand made and costly model inputs
array_dir = os.path.join("data","array_dir")
if os.path.exists(array_dir):
shutil.rmtree(array_dir)
os.mkdir(array_dir)
ml = flopy.modflow.Modflow(model_ws=model_ws)
dis = flopy.modflow.ModflowDis(ml,nlay=nlay,nrow=nrow,ncol=ncol,steady=False,nper=2)
Explanation: FloPy
Quick demo on how FloPy handles external files for arrays
End of explanation
hk = np.zeros((nlay,nrow,ncol)) + 5.0
vka = np.zeros_like(hk)
fnames = []
for i,h in enumerate(hk):
fname = os.path.join(array_dir,"hk_{0}.ref".format(i+1))
fnames.append(fname)
np.savetxt(fname,h)
vka[i] = i+1
lpf = flopy.modflow.ModflowLpf(ml,hk=fnames,vka=vka)
Explanation: make an hk and vka array. We'll save hk to files - pretent that you spent months making this important model property. Then make an lpf
End of explanation
warmup_recharge = np.ones((nrow,ncol))
important_recharge = np.random.random((nrow,ncol))
fname = os.path.join(array_dir,"important_recharge.ref")
np.savetxt(fname,important_recharge)
rch = flopy.modflow.ModflowRch(ml,rech={0:warmup_recharge,1:fname})
ml.write_input()
Explanation: Let's also have some recharge with mixed args as well. Pretend the recharge in the second stress period is very important and precise
End of explanation
print("model_ws:",ml.model_ws)
print('\n'.join(os.listdir(ml.model_ws)))
Explanation: Let's look at the files that were created
End of explanation
open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()[:20]
Explanation: We see that a copy of the hk files as well as the important recharge file were made in the model_ws.Let's looks at the lpf file
End of explanation
ml.array_free_format
Explanation: We see that the open/close approach was used - this is because ml.array_free_format is True. Notice that vka is written internally
End of explanation
print(ml.model_ws)
ml.model_ws = os.path.join("data","new_external_demo_dir")
Explanation: Now change model_ws
End of explanation
ml.write_input()
# list the files in model_ws that have 'hk' in the name
print('\n'.join([name for name in os.listdir(ml.model_ws) if "hk" in name or "impor" in name]))
Explanation: Now when we call write_input(), a copy of external files are made in the current model_ws
End of explanation
# make a model - same code as before except for the model constructor
nlay,nrow,ncol = 10,20,5
model_ws = os.path.join("data","external_demo")
if os.path.exists(model_ws):
shutil.rmtree(model_ws)
# the place for all of your hand made and costly model inputs
array_dir = os.path.join("data","array_dir")
if os.path.exists(array_dir):
shutil.rmtree(array_dir)
os.mkdir(array_dir)
# lets make an external path relative to the model_ws
ml = flopy.modflow.Modflow(model_ws=model_ws, external_path="ref")
dis = flopy.modflow.ModflowDis(ml,nlay=nlay,nrow=nrow,ncol=ncol,steady=False,nper=2)
hk = np.zeros((nlay,nrow,ncol)) + 5.0
vka = np.zeros_like(hk)
fnames = []
for i,h in enumerate(hk):
fname = os.path.join(array_dir,"hk_{0}.ref".format(i+1))
fnames.append(fname)
np.savetxt(fname,h)
vka[i] = i+1
lpf = flopy.modflow.ModflowLpf(ml,hk=fnames,vka=vka)
warmup_recharge = np.ones((nrow,ncol))
important_recharge = np.random.random((nrow,ncol))
fname = os.path.join(array_dir,"important_recharge.ref")
np.savetxt(fname,important_recharge)
rch = flopy.modflow.ModflowRch(ml,rech={0:warmup_recharge,1:fname})
Explanation: Now we see that the external files were copied to the new model_ws
Using external_path
It is sometimes useful when first building a model to write the model arrays as external files for processing and parameter estimation. The model attribute external_path triggers this behavior
End of explanation
os.listdir(ml.model_ws)
Explanation: We can see that the model constructor created both model_ws and external_path which is relative to the model_ws
End of explanation
ml.write_input()
open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()[:20]
Explanation: Now, when we call write_input(), any array properties that were specified as np.ndarray will be written externally. If a scalar was passed as the argument, the value remains internal to the model input files
End of explanation
ml.lpf.ss.how = "internal"
ml.write_input()
open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()[:20]
print('\n'.join(os.listdir(os.path.join(ml.model_ws,ml.external_path))))
Explanation: Now, vka was also written externally, but not the storage properties.Let's verify the contents of the external path directory. We see our hard-fought hk and important_recharge arrays, as well as the vka arrays.
End of explanation
# make a model - same code as before except for the model constructor
nlay,nrow,ncol = 10,20,5
model_ws = os.path.join("data","external_demo")
if os.path.exists(model_ws):
shutil.rmtree(model_ws)
# the place for all of your hand made and costly model inputs
array_dir = os.path.join("data","array_dir")
if os.path.exists(array_dir):
shutil.rmtree(array_dir)
os.mkdir(array_dir)
# lets make an external path relative to the model_ws
ml = flopy.modflow.Modflow(model_ws=model_ws, external_path="ref")
# explicitly reset the free_format flag BEFORE ANY PACKAGES ARE MADE!!!
ml.array_free_format = False
dis = flopy.modflow.ModflowDis(ml,nlay=nlay,nrow=nrow,ncol=ncol,steady=False,nper=2)
hk = np.zeros((nlay,nrow,ncol)) + 5.0
vka = np.zeros_like(hk)
fnames = []
for i,h in enumerate(hk):
fname = os.path.join(array_dir,"hk_{0}.ref".format(i+1))
fnames.append(fname)
np.savetxt(fname,h)
vka[i] = i+1
lpf = flopy.modflow.ModflowLpf(ml,hk=fnames,vka=vka)
ml.lpf.ss.how = "internal"
warmup_recharge = np.ones((nrow,ncol))
important_recharge = np.random.random((nrow,ncol))
fname = os.path.join(array_dir,"important_recharge.ref")
np.savetxt(fname,important_recharge)
rch = flopy.modflow.ModflowRch(ml,rech={0:warmup_recharge,1:fname})
ml.write_input()
Explanation: Fixed format
All of this behavior also works for fixed-format type models (really, really old models - I mean OLD!)
End of explanation
open(os.path.join(ml.model_ws,ml.name+".nam"),'r').readlines()
Explanation: We see that now the external arrays are being handled through the name file. Let's look at the name file
End of explanation
ml.dis.botm[0].format.binary = True
ml.write_input()
open(os.path.join(ml.model_ws,ml.name+".nam"),'r').readlines()
open(os.path.join(ml.model_ws,ml.name+".dis"),'r').readlines()
Explanation: "free" and "binary" format
End of explanation
ml.lpf.hk[0].how
Explanation: The .how attribute
Util2d includes a .how attribute that gives finer grained control of how arrays will written
End of explanation
ml.lpf.hk[0].how = "openclose"
ml.lpf.hk[0].how
ml.write_input()
Explanation: This will raise an error since our model does not support free format...
End of explanation
ml.lpf.hk[0].how = "external"
ml.lpf.hk[0].how
ml.dis.top.how = "external"
ml.write_input()
open(os.path.join(ml.model_ws,ml.name+".dis"),'r').readlines()
open(os.path.join(ml.model_ws,ml.name+".lpf"),'r').readlines()
open(os.path.join(ml.model_ws,ml.name+".nam"),'r').readlines()
Explanation: So let's reset hk layer 1 back to external...
End of explanation |
13,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Step1: Trying that with a real picture
Step2: A color image is a 3D array, where the last dimension has size 3 and represents the red, green, and blue channels
Step3: These are just numpy arrays. Making a red square is easy using just array slicing and manipulation
Step4: As we will see, this opens up many lines of analysis for free.
Exercise
Step5: Test your function like so
Step6: Bonus points
Step7: Test your function here | Python Code:
import numpy as np
r = np.random.rand(500, 500)
from matplotlib import pyplot as plt, cm
plt.imshow(r, cmap=cm.gray, interpolation='nearest')
Explanation: Introduction: images are numpy arrays
A grayscale image is just a 2D array:
End of explanation
from skimage import data
coins = data.coins()
print(type(coins))
print(coins.dtype)
print(coins.shape)
plt.imshow(coins, cmap=cm.gray, interpolation='nearest')
Explanation: Trying that with a real picture:
End of explanation
lena = data.lena()
print(lena.shape)
plt.imshow(lena, interpolation='nearest')
Explanation: A color image is a 3D array, where the last dimension has size 3 and represents the red, green, and blue channels:
End of explanation
lena[100:200, 100:200, :] = [255, 0, 0] # [red, green, blue]
plt.imshow(lena)
Explanation: These are just numpy arrays. Making a red square is easy using just array slicing and manipulation:
End of explanation
def draw_h(image, coords, in_place=True):
pass # code goes here
Explanation: As we will see, this opens up many lines of analysis for free.
Exercise: draw an H
Define a function that takes as input an RGB image and a pair of coordinates (row, column), and returns the image (optionally a copy) with green letter H overlaid at those coordinates. The coordinates should point to the top-left corner of the H.
The arms and strut of the H should have a width of 3 pixels, and the H itself should have a height of 24 pixels and width of 20 pixels.
End of explanation
lena_h = draw_h(lena, (50, -50), in_place=False)
plt.imshow(lena_h)
Explanation: Test your function like so:
End of explanation
def plot_intensity(image, row):
pass # code goes here
Explanation: Bonus points: RGB intensity plot
Plot the intensity of each channel of the image along some row.
End of explanation
plot_intensity(coins, 50)
plot_intensity(lena, 250)
Explanation: Test your function here:
End of explanation |
13,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step11: Vertex constants
Setup up the following constants for Vertex
Step12: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify
Step13: Container (Docker) image
Next, we will set the Docker container images for training and prediction
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15
Step14: Machine Type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own custom model and training for CIFAR10.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving.
Step16: Train a model
There are two ways you can train a custom model using a container image
Step17: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.
boot_disk_type
Step18: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following
Step19: Assemble a job specification
Now assemble the complete description for the custom job specification
Step20: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step21: Task.py contents
In the next cell, you write the contents of the training script task.py. We won't go into detail, it's just there for you to browse. In summary
Step22: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step23: Train the model
Now start the training of your custom training job on Vertex. Use this helper function create_custom_job, which takes the following parameter
Step24: Now get the unique identifier for the custom job you created.
Step25: Get information on a custom job
Next, use this helper function get_custom_job, which takes the following parameter
Step26: Deployment
Training the above model may take upwards of 20 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, we will need to know the location of the saved model, which the Python script saved in your local Cloud Storage bucket at MODEL_DIR + '/saved_model.pb'.
Step27: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step28: Evaluate the model
Now find out how good the model is.
Load evaluation data
You will load the CIFAR10 test (holdout) data from tf.keras.datasets, using the method load_data(). This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements
Step29: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step30: Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work
When you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts
Step31: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as a HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
Step32: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters
Step33: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter
Step34: Deploy the Model resource
Now deploy the trained Vertex custom Model resource. This requires two steps
Step35: Now get the unique identifier for the Endpoint resource you created.
Step36: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step37: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step38: Make a online prediction request
Now do a online prediction to your deployed model.
Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
Step39: Send the prediction request
Ok, now you have a test image. Use this helper function predict_image, which takes the following parameters
Step40: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters
Step41: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: Custom training image super resolution model for online prediction with post processing of prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_image_super_resolution_online_post.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_image_super_resolution_online_post.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom image super resolution model for online prediction, with post-processing of the prediction.
Dataset
The dataset used for this tutorial is the CIFAR10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
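Because this copy of CIFAR10 ships with TensorFlow itself, it can be loaded in a single call. A minimal illustration using the standard tf.keras API:
import tensorflow as tf
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()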
Objective
In this tutorial, you will learn how to create a custom model from a Python script in a Docker container using the Vertex client library, do a prediction on the deployed model, and do post-processing on the prediction in the serving binary. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train a TensorFlow model using a custom container.
Retrieve and load the model artifacts.
View the model evaluation.
Construct serving function for post processing.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of the google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources you create, generate a timestamp for each session and append it to the names of the resources created in this tutorial.
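For example (JOB_NAME here is only an illustration of the naming pattern, not a variable defined by this tutorial):
JOB_NAME = "custom_job_" + TIMESTAMP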
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
Note: GPU container images for TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Container (Docker) image
Next, we will set the Docker container images for training and prediction
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest
TensorFlow 2.4
gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest
XGBoost
gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1
Scikit-learn
gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest
Pytorch
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest
For the latest list, see Pre-built containers for training.
TensorFlow 1.15
gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest
XGBoost
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest
Scikit-learn
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest
For the latest list, see Pre-built containers for prediction
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Machine Type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for CIFAR10.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving.
End of explanation
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
Explanation: Train a model
There are two ways you can train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.
Prepare your custom job specification
Now that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:
worker_pool_spec : The specification of the type of machine(s) you will use for training and how many (single or distributed)
python_package_spec : The specification of the Python package to be installed with the pre-built container.
Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training.
- machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.
- accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.
- accelerator_count: The number of accelerators.
End of explanation
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
Explanation: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.
boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
boot_disk_size_gb: Size of disk in GB.
End of explanation
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_cifar10.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
Explanation: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec : (optional) The disk storage specification.
python_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.
Let's dive deeper now into the python package specification:
-executor_image_spec: This is the docker image which is configured for your custom training job.
-package_uris: This is a list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image.
-python_module: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking trainer.task.py -- note that it was not necessary to append the .py suffix.
-args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:
- "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
- indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The training distribution strategy to use for single or distributed training.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
End of explanation
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
Explanation: Assemble a job specification
Now assemble the complete description for the custom job specification:
display_name: The human readable name you assign to this custom job.
job_spec: The specification for the custom job.
worker_pool_specs: The specification for the machine VM instances.
base_output_directory: This tells the service the Cloud Storage location where to save the model artifacts (when variable DIRECT = False). The service will then pass the location to the training script as the environment variable AIP_MODEL_DIR, and the path will be of the form: <output_uri_prefix>/model
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image super resolution\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. We won't go into detail, it's just there for you to browse. In summary:
Get the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps
Saves the trained model (save(args.model_dir)) to the specified model directory.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
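As an optional sanity check (not in the original tutorial), you can verify that the packaged trainer actually landed in the bucket:
# Optional check: confirm the training package was uploaded
! gsutil ls -l $BUCKET_NAME/trainer_cifar10.tar.gz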
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
Explanation: Train the model
Now start the training of your custom training job on Vertex. Use this helper function create_custom_job, which takes the following parameter:
-custom_job: The specification for the custom job.
The helper function calls job client service's create_custom_job method, with the following parameters:
-parent: The Vertex location path to Dataset, Model and Endpoint resources.
-custom_job: The specification for the custom job.
You will display a handful of the fields returned in response object, with the two that are of most interest are:
response.name: The Vertex fully qualified identifier assigned to this custom training job. You save this identifier for use in subsequent steps.
response.state: The current state of the custom training job.
End of explanation
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
Explanation: Now get the unique identifier for the custom job you created.
End of explanation
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
Explanation: Get information on a custom job
Next, use this helper function get_custom_job, which takes the following parameter:
name: The Vertex fully qualified identifier for the custom job.
The helper function calls the job client service's get_custom_job method, with the following parameter:
name: The Vertex fully qualified identifier for the custom job.
If you recall, you got the Vertex fully qualified identifier for the custom job in the response.name field when you called the create_custom_job method, and saved the identifier in the variable job_id.
End of explanation
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
Explanation: Deployment
Training the above model may take upwards of 20 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting start_time from end_time. For your model, we will need to know the location of the saved model, which the Python script saved in your Cloud Storage bucket at MODEL_DIR + '/saved_model.pb'.
End of explanation
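Before loading the model, an optional check (not part of the original notebook) is to list the exported artifacts and confirm the SavedModel files exist:
# Optional check: the directory should contain saved_model.pb plus the variables/ and assets/ folders
! gsutil ls $MODEL_DIR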
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
Explanation: Evaluate the model
Now find out how good the model is.
Load evaluation data
You will load the CIFAR10 test (holdout) data from tf.keras.datasets, using the method load_data(). This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.
You don't need the training data, which is why we loaded it as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescaling) the pixel data by dividing each pixel by 255. This will replace each single byte integer pixel with a 32-bit floating point number between 0 and 1.
y_test:<br/>
2. The labels are currently scalar (sparse). If you look back at the compile() step in the trainer/task.py script, you will find that it was compiled for sparse labels. So we don't need to do anything more.
End of explanation
model.evaluate(x_test, y_test)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
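Beyond the aggregate loss and accuracy, a small illustrative spot check (not in the original notebook) is to compare a few individual predictions against their labels using the locally loaded model:
# Illustrative spot check: predicted class vs. true label for the first few test images
import numpy as np
predicted_classes = np.argmax(model.predict(x_test[:5]), axis=1)
print("predicted:", predicted_classes, "actual:", y_test[:5].flatten())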
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(16, 16))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 16, 16, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
def _postprocess(bytes_output):
rescale = tf.cast(bytes_output * 255.0, tf.uint8)
reshape = tf.reshape(rescale, (32, 32, 3))
encoded = tf.io.encode_jpeg(reshape)
return tf.cast(encoded, tf.string)
@tf.function(input_signature=[tf.TensorSpec([None, 32, 32, 3], tf.float32)])
def postprocess_fn(bytes_outputs):
encoded_images = tf.map_fn(
_postprocess, bytes_outputs, dtype=tf.string, back_prop=False
)
return encoded_images
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
post = postprocess_fn(prob)
return post
tf.saved_model.save(
model,
model_path_to_deploy,
signatures={
"serving_default": serving_fn,
},
)
Explanation: Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work
When you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts:
preprocessing function:
Converts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).
Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.
post-processing function:
Converts the model output to the format expected by the receiving application -- e.g., compresses the output.
Packages the output for the receiving application -- e.g., adds headings, makes a JSON object, etc.
Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.
One thing to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during compilation of the serving function indicating that you are using an EagerTensor, which is not supported.
Serving function for image data
Preprocessing
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:
- io.decode_jpeg- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB).
- image.convert_image_dtype - Changes integer pixel values to float 32, and rescales pixel data between 0 and 1.
- image.resize - Resizes the image to match the input shape for the model.
At this point, the data can be passed to the model (m_call).
Post-Processing
The return value from prob = m_call(**images) will be a list of tensors, one per instance in the prediction request. Each tensor will be the predicted super-resolution image of the corresponding instance.
TODO
End of explanation
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as a HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
End of explanation
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"cifar10-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
Explanation: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters:
display_name: A human readable name for the Model resource.
image_uri: The container image for the model deployment.
model_uri: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the trainer/task.py saved the model artifacts, which we specified in the variable MODEL_DIR.
The helper function calls the Model client service's method upload_model, which takes the following parameters:
parent: The Vertex location root path for Dataset, Model and Endpoint resources.
model: The specification for the Vertex Model resource instance.
Let's now dive deeper into the Vertex model specification model. This is a dictionary object that consists of the following fields:
display_name: A human readable name for the Model resource.
metadata_schema_uri: Since your model was built without an Vertex Dataset resource, you will leave this blank ('').
artifact_uri: The Cloud Storage path where the model is stored in SavedModel format.
container_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the Model resource will serve predictions. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
Uploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.
The helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id.
End of explanation
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
Explanation: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
This helper function calls the Vertex Model client service's method get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
End of explanation
ENDPOINT_NAME = "cifar10_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex custom Model resource. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed (and to scale back down to), and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
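For example (illustrative values only, not used in the rest of this tutorial), allowing the endpoint to auto-scale between one and four replicas would only require changing the two constants:
# Illustrative alternative: allow the endpoint to scale between 1 and 4 replicas
# MIN_NODES = 1
# MAX_NODES = 4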
DEPLOYED_NAME = "cifar10_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model already deployed to the endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
dedicated_resources: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. It can seem confusing at first: you can deploy more than one instance of your model to an endpoint, and then set what percentage of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but have it only get, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
# Use the first image from the preprocessed test set as the test instance
test_image = x_test[0]
test_label = y_test[0]
print(test_image.shape)
Explanation: Make an online prediction request
Now do an online prediction with your deployed model.
Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
End of explanation
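The next cell expects the test image as a base64-encoded JPEG string named b64str, but the original notebook does not show that encoding step. A minimal sketch (assuming test_image is the float image prepared above, scaled to [0, 1]) might look like this:
import base64
# Compress the test image to JPEG bytes and base64-encode them for the JSON request body
jpeg_bytes = tf.io.encode_jpeg(tf.cast(test_image * 255.0, tf.uint8)).numpy()
b64str = base64.b64encode(jpeg_bytes).decode("utf-8")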
def predict_image(image, endpoint, parameters_dict):
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{serving_input: {"b64": image}}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters_dict
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", prediction)
predict_image(b64str, endpoint_id, None)
Explanation: Send the prediction request
Ok, now you have a test image. Use this helper function predict_image, which takes the following parameters:
image: The test image data as a numpy array.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed to.
parameters_dict: Additional parameters for serving.
This function calls the prediction client service predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed to.
instances: A list of instances (encoded images) to predict.
parameters: Additional parameters for serving.
To pass the image data to the prediction service, in the previous step you encoded the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network. You also need to tell the serving binary that the content has been base64 encoded, so it can decode the data back to raw bytes before passing it to the deployed model.
Each instance in the prediction request is a dictionary entry of the form:
{serving_input: {'b64': content}}
input_name: the name of the input layer of the underlying model.
'b64': A key that indicates the content is base64 encoded.
content: The compressed JPG image bytes as a base64 encoded string.
Since the predict() service can take multiple images (instances), you will send your single image as a list of one image. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() service.
The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction:
predictions: Confidence level for the prediction, between 0 and 1, for each of the classes.
End of explanation
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
13,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Distribution
the way in which something is shared out among a group or spread over an area
Random Variable
a variable whose value is subject to variations due to chance (i.e. randomness, in a mathematical sense). A random variable can take on a set of possible different values (similarly to other mathematical variables), each with an associated probability wiki
Types
Discrete Random Variables <br>
Eg
Step1: Question If you randomly select a car, what is the probability, with equal chances of selecting any of the make available in our dataset, that the mileage will be greater than 25?
Step2: Using scipy to use distribution
Step3: Standard Error
It is a measure of how far the estimate tends to be off, on average. More technically, it is the standard deviation of the sampling distribution of a statistic (most often the mean). Please do not confuse it with standard deviation: standard deviation measures the variability of the observed quantity, while standard error describes the variability of the estimate.
To illustrate this, let's do the following.
Not all the make and models are available in our dataset. Also, we had to impute some of the values.
Let's say that a leading automobile magazine did an extensive survey and printed that the mean mileage is 22.7.
Compute standard deviation and standard error for the mean for our dataset
Step4: We'll follow the same procedures we did in the resampling.ipynb. We will bootstrap samples from actual observed data 10,000 times and compute difference between sample mean and actual mean. Find root mean squared error to get standard error | Python Code:
import pandas as pd
import seaborn as sns
sns.set(color_codes=True)
%matplotlib inline
#Import the data
cars = pd.read_csv("cars_v1.csv", encoding="ISO-8859-1")
#Replace missing values in Mileage with mean
cars.Mileage.fillna(cars.Mileage.mean(), inplace=True)
sns.distplot(cars.Mileage, kde=False)
Explanation: Distribution
the way in which something is shared out among a group or spread over an area
Random Variable
a variable whose value is subject to variations due to chance (i.e. randomness, in a mathematical sense). A random variable can take on a set of possible different values (similarly to other mathematical variables), each with an associated probability wiki
Types
Discrete Random Variables <br>
Eg: Genders of the buyers buying shoe
Continuous Random Variables <br>
Eg: Shoe Sales in a quarter
Probability Distribution
Assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. wiki
Probability Mass Function
probability mass function (pmf) is a function that gives the probability that a discrete random variable is exactly equal to some value
Discrete probability distribution(Cumulative Mass Function)
probability distribution characterized by a probability mass function
Probability Density Function
function that describes the relative likelihood for this random variable to take on a given value
Continuous probability distribution(Cumulative Density function)
probability that the variable takes a value less than or equal to x
Central Limit Theorem
Given certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables, each with a well-defined expected value and well-defined variance, will be approximately normally distributed, regardless of the underlying distribution. wiki
Normal Distribution
A bell shaped distribution. It is also called Gaussian distribution
<img style="float: left;" src="img/normaldist.png" height="220" width="220">
<br>
<br>
<br>
<br>
PDF
<br>
<br>
<img style="float: left;" src="img/normal_pdf.png" height="320" width="320">
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
CDF
<br>
<br>
<img style="float: left;" src="img/normal_cdf.png" height="320" width="320">
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
Skewness
Measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. wiki
<img style="float: left;" src="img/skewness.png" height="620" width="620">
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
Kurtosis
Measure of the "peakedness" of the probability distribution of a real-valued random variable wiki
<br>
<br>
<img style="float: left;" src="img/kurtosis.png" height="420" width="420">
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
Binomial Distribution
Binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p. A success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; when n = 1, the binomial distribution is a Bernoulli distribution wiki
<br>
<br>
<img style="float: left;" src="img/binomial_pmf.png" height="420" width="420">
<br>
<br>
<br>
Exponential Distribution
Probability distribution that describes the time between events in a Poisson process, i.e. a process in which events occur continuously and independently at a constant average rate. It has the key property of being memoryless. wiki
<br>
<br>
<img style="float: left;" src="img/exponential_pdf.png" height="420" width="420">
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
Uniform distribution
All values have the same frequency wiki
<br>
<br>
<img style="float: left;" src="img/uniform.png" height="420" width="420">
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
6-sigma philosophy
<img style="float: left;" src="img/6sigma.png" height="520" width="520">
Histograms
Most commonly used representation of a distribution.
Let's plot the distribution of car mileage in our dataset
End of explanation
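The Central Limit Theorem described above is easy to see empirically. Here is a small illustrative simulation (an addition for clarity, not part of the original notebook) showing that means of samples drawn from a uniform distribution look approximately normal:
import numpy as np
import matplotlib.pyplot as plt
# Means of 1,000 samples (each of size 50) drawn from a uniform distribution
sample_means = np.random.uniform(0, 1, size=(1000, 50)).mean(axis=1)
plt.hist(sample_means, bins=30)   # roughly bell-shaped, as the CLT predicts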
sns.distplot(cars.Mileage, bins=range(0,50,1))
Explanation: Question If you randomly select a car, what is the probability, with equal chances of selecting any of the make available in our dataset, that the mileage will be greater than 25?
End of explanation
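One simple way to answer the question above, not shown in the original notebook, is the empirical estimate: the fraction of cars in the dataset whose mileage exceeds 25.
# Empirical probability that a randomly selected car has mileage greater than 25
prob_gt_25 = (cars.Mileage > 25).mean()
print(prob_gt_25)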
from scipy import stats
import scipy as sp
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt
%matplotlib inline
#Generate random numbers that are normally distributed
random_normal = sp.randn(100)
plt.scatter(range(100), random_normal)
print("mean:", random_normal.mean(), " variance:", random_normal.var())
#Create a normal distribution with mean 2.5 and standard deviation 1.7
n = stats.norm(loc=2.5, scale=1.7)
#Generate random number from that distribution
n.rvs()
#for the above normal distribution, what is the pdf at 0.3?
n.pdf(0.3)
#Binomial distribution with `p` = 0.4 and 10 trials; PMF evaluated at k = 0, 1, ..., 14
stats.binom.pmf(range(15), 10, 0.4)
Explanation: Using scipy to use distribution
End of explanation
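Tying the scipy API back to the earlier question (an illustrative sketch, assuming a normal approximation is reasonable for the mileage data), you can fit a normal distribution to the observed mileage and use its survival function:
# Normal approximation: P(Mileage > 25) under a normal fitted to the sample mean and std
mileage_dist = stats.norm(loc=cars.Mileage.mean(), scale=cars.Mileage.std())
print(mileage_dist.sf(25))   # sf(x) = 1 - cdf(x)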
cars.head()
#Mean and standard deviation of car's mileage
print(" Sample Mean:", cars.Mileage.mean(), "\n", "Sample Standard Deviation:", cars.Mileage.std())
print(" Max Mileage:", cars.Mileage.max(), "\n", "Min Mileage:", cars.Mileage.min())
Explanation: Standard Error
It is a measure of how far the estimate tends to be off, on average. More technically, it is the standard deviation of the sampling distribution of a statistic (most often the mean). Please do not confuse it with standard deviation: standard deviation measures the variability of the observed quantity, while standard error describes the variability of the estimate.
To illustrate this, let's do the following.
Not all the make and models are available in our dataset. Also, we had to impute some of the values.
Let's say that a leading automobile magazine did an extensive survey and printed that the mean mileage is 22.7.
Compute standard deviation and standard error for the mean for our dataset
End of explanation
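Before running the bootstrap below, note that the textbook estimate of the standard error of the mean is simply the sample standard deviation divided by the square root of the sample size; a quick sketch for comparison (an addition, not in the original notebook):
# Analytic standard error of the mean: s / sqrt(n)
print(cars.Mileage.std() / np.sqrt(len(cars)))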
def squared_error(bootstrap_sample, actual_mean):
return np.square(bootstrap_sample.mean() - actual_mean)
def experiment_for_computing_standard_error(observed_mileage, number_of_times, actual_mean):
bootstrap_mean = np.empty([number_of_times, 1], dtype=np.int32)
bootstrap_sample = np.random.choice(observed_mileage, size=[observed_mileage.size, number_of_times], replace=True)
bootstrap_squared_error = np.apply_along_axis(squared_error, 1, bootstrap_sample, actual_mean)
return np.sqrt(bootstrap_squared_error.mean())
#Standard error of the estimate for mean
experiment_for_computing_standard_error(np.array(cars.Mileage), 10, 22.7)
Explanation: We'll follow the same procedures we did in the resampling.ipynb. We will bootstrap samples from actual observed data 10,000 times and compute difference between sample mean and actual mean. Find root mean squared error to get standard error
End of explanation |
13,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Практическое задание к уроку 1 (2 неделя).
Линейная регрессия
Step1: Мы будем работать с датасетом "bikes_rent.csv", в котором по дням записаны календарная информация и погодные условия, характеризующие автоматизированные пункты проката велосипедов, а также число прокатов в этот день. Последнее мы будем предсказывать; таким образом, мы будем решать задачу регрессии.
Знакомство с данными
Загрузите датасет с помощью функции pandas.read_csv в переменную df. Выведите первые 5 строчек, чтобы убедиться в корректном считывании данных
Step2: Для каждого дня проката известны следующие признаки (как они были указаны в источнике данных)
Step3: Блок 1. Ответьте на вопросы (каждый 0.5 балла)
Step4: В выборке есть признаки, коррелирующие с целевым, а значит, задачу можно решать линейными методами.
По графикам видно, что некоторые признаки похожи друг на друга. Поэтому давайте также посчитаем корреляции между вещественными признаками.
Step5: На диагоналях, как и полагается, стоят единицы. Однако в матрице имеются еще две пары сильно коррелирующих столбцов
Step6: Признаки имеют разный масштаб, значит для дальнейшей работы нам лучше нормировать матрицу объекты-признаки.
Проблема первая
Step7: Давайте обучим линейную регрессию на наших данных и посмотрим на веса признаков.
Step8: Мы видим, что веса при линейно-зависимых признаках по модулю значительно больше, чем при других признаках.
Чтобы понять, почему так произошло, вспомним аналитическую формулу, по которой вычисляются веса линейной модели в методе наименьших квадратов
Step9: Проблема вторая
Step10: Визуализируем динамику весов при увеличении параметра регуляризации
Step11: Ответы на следующие вопросы можно давать, глядя на графики или выводя коэффициенты на печать.
Блок 2. Ответьте на вопросы (каждый 0.25 балла)
Step12: Итак, мы выбрали некоторый параметр регуляризации. Давайте посмотрим, какие бы мы выбирали alpha, если бы делили выборку только один раз на обучающую и тестовую, то есть рассмотрим траектории MSE, соответствующие отдельным блокам выборки. | Python Code:
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Практическое задание к уроку 1 (2 неделя).
Линейная регрессия: переобучение и регуляризация
В этом задании мы на примерах увидим, как переобучаются линейные модели, разберем, почему так происходит, и выясним, как диагностировать и контролировать переобучение.
Во всех ячейках, где написан комментарий с инструкциями, нужно написать код, выполняющий эти инструкции. Остальные ячейки с кодом (без комментариев) нужно просто выполнить. Кроме того, в задании требуется отвечать на вопросы; ответы нужно вписывать после выделенного слова "Ответ:".
Напоминаем, что посмотреть справку любого метода или функции (узнать, какие у нее аргументы и что она делает) можно с помощью комбинации Shift+Tab. Нажатие Tab после имени объекта и точки позволяет посмотреть, какие методы и переменные есть у этого объекта.
End of explanation
# (0 баллов)
# Считайте данные и выведите первые 5 строк
df = pd.read_csv('bikes_rent.csv')
df.head()
Explanation: Мы будем работать с датасетом "bikes_rent.csv", в котором по дням записаны календарная информация и погодные условия, характеризующие автоматизированные пункты проката велосипедов, а также число прокатов в этот день. Последнее мы будем предсказывать; таким образом, мы будем решать задачу регрессии.
Знакомство с данными
Загрузите датасет с помощью функции pandas.read_csv в переменную df. Выведите первые 5 строчек, чтобы убедиться в корректном считывании данных:
End of explanation
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(15, 10))
for idx, feature in enumerate(df.columns[:-1]):
df.plot(feature, "cnt", subplots=True, kind="scatter", ax=axes[idx / 4, idx % 4])
Explanation: Для каждого дня проката известны следующие признаки (как они были указаны в источнике данных):
* season: 1 - весна, 2 - лето, 3 - осень, 4 - зима
* yr: 0 - 2011, 1 - 2012
* mnth: от 1 до 12
* holiday: 0 - нет праздника, 1 - есть праздник
* weekday: от 0 до 6
* workingday: 0 - нерабочий день, 1 - рабочий день
* weathersit: оценка благоприятности погоды от 1 (чистый, ясный день) до 4 (ливень, туман)
* temp: температура в Цельсиях
* atemp: температура по ощущениям в Цельсиях
* hum: влажность
* windspeed(mph): скорость ветра в милях в час
* windspeed(ms): скорость ветра в метрах в секунду
* cnt: количество арендованных велосипедов (это целевой признак, его мы будем предсказывать)
Итак, у нас есть вещественные, бинарные и номинальные (порядковые) признаки, и со всеми из них можно работать как с вещественными. С номинальныеми признаками тоже можно работать как с вещественными, потому что на них задан порядок. Давайте посмотрим на графиках, как целевой признак зависит от остальных
End of explanation
# Код 1.1 (0.5 балла)
# Посчитайте корреляции всех признаков, кроме последнего, с последним с помощью метода corrwith:
df_new = df.drop('cnt',axis = 1)
print 'Все кроме последнего: \n', df_new.corr()
print '\n Последний признак: \n', df.corrwith(df['cnt'])
Explanation: Блок 1. Ответьте на вопросы (каждый 0.5 балла):
1. Каков характер зависимости числа прокатов от месяца?
* ответ: линейная
1. Укажите один или два признака, от которых число прокатов скорее всего зависит линейно
* ответ: день недели, месяц.
Давайте более строго оценим уровень линейной зависимости между признаками и целевой переменной. Хорошей мерой линейной зависимости между двумя векторами является корреляция Пирсона. В pandas ее можно посчитать с помощью двух методов датафрейма: corr и corrwith. Метод df.corr вычисляет матрицу корреляций всех признаков из датафрейма. Методу df.corrwith нужно подать еще один датафрейм в качестве аргумента, и тогда он посчитает попарные корреляции между признаками из df и этого датафрейма.
End of explanation
# Код 1.2 (0.5 балла)
# Посчитайте попарные корреляции между признаками temp, atemp, hum, windspeed(mph), windspeed(ms) и cnt
# с помощью метода corr:
df_new = df.drop(['season', 'mnth','yr','holiday','weekday','workingday','weathersit'], axis=1)
df_new.corr()
Explanation: В выборке есть признаки, коррелирующие с целевым, а значит, задачу можно решать линейными методами.
По графикам видно, что некоторые признаки похожи друг на друга. Поэтому давайте также посчитаем корреляции между вещественными признаками.
End of explanation
# Код 1.3 (0.5 балла)
# Выведите средние признаков
df.mean()
Explanation: На диагоналях, как и полагается, стоят единицы. Однако в матрице имеются еще две пары сильно коррелирующих столбцов: temp и atemp (коррелируют по своей природе) и два windspeed (потому что это просто перевод одних единиц в другие). Далее мы увидим, что этот факт негативно сказывается на обучении линейной модели.
Напоследок посмотрим средние признаков (метод mean), чтобы оценить масштаб признаков и доли 1 у бинарных признаков.
End of explanation
from sklearn.preprocessing import scale
from sklearn.utils import shuffle
df_shuffled = shuffle(df, random_state=123)
X = scale(df_shuffled[df_shuffled.columns[:-1]])
y = df_shuffled["cnt"]
Explanation: Признаки имеют разный масштаб, значит для дальнейшей работы нам лучше нормировать матрицу объекты-признаки.
Проблема первая: коллинеарные признаки
Итак, в наших данных один признак дублирует другой, и есть еще два очень похожих. Конечно, мы могли бы сразу удалить дубликаты, но давайте посмотрим, как бы происходило обучение модели, если бы мы не заметили эту проблему.
Для начала проведем масштабирование, или стандартизацию признаков: из каждого признака вычтем его среднее и поделим на стандартное отклонение. Это можно сделать с помощью метода scale.
Кроме того, нужно перемешать выборку, это потребуется для кросс-валидации.
End of explanation
from sklearn.linear_model import LinearRegression
# Код 2.1 (1 балл)
# Создайте объект линейного регрессора, обучите его на всех данных и выведите веса модели
# (веса хранятся в переменной coef_ класса регрессора).
# Можно выводить пары (название признака, вес), воспользовавшись функцией zip, встроенной в язык python
# Названия признаков хранятся в переменной df.columns
lin_reg = LinearRegression()
lin_reg.fit(X,y)
print zip(df.columns,lin_reg.coef_)
Explanation: Давайте обучим линейную регрессию на наших данных и посмотрим на веса признаков.
End of explanation
from sklearn.linear_model import Lasso, Ridge
# Код 2.2 (0.5 балла)
# Обучите линейную модель с L1-регуляризацией и выведите веса
lass = Lasso()
lass.fit(X,y)
print zip(df.columns,lass.coef_)
# Код 2.3 (0.5 балла)
# Обучите линейную модель с L2-регуляризацией и выведите веса
rid = Ridge()
rid.fit(X,y)
print zip(df.columns,rid.coef_)
Explanation: Мы видим, что веса при линейно-зависимых признаках по модулю значительно больше, чем при других признаках.
Чтобы понять, почему так произошло, вспомним аналитическую формулу, по которой вычисляются веса линейной модели в методе наименьших квадратов:
$w = (X^TX)^{-1} X^T y$.
Если в X есть коллинеарные (линейно-зависимые) столбцы, матрица $X^TX$ становится вырожденной, и формула перестает быть корректной. Чем более зависимы признаки, тем меньше определитель этой матрицы и тем хуже аппроксимация $Xw \approx y$. Такая ситуацию называют проблемой мультиколлинеарности, вы обсуждали ее на лекции.
С парой temp-atemp чуть менее коррелирующих переменных такого не произошло, однако на практике всегда стоит внимательно следить за коэффициентами при похожих признаках.
Решение проблемы мультиколлинеарности состоит в регуляризации линейной модели. К оптимизируемому функционалу прибавляют L1 или L2 норму весов, умноженную на коэффициент регуляризации $\alpha$. В первом случае метод называется Lasso, а во втором --- Ridge. Подробнее об этом также рассказано в лекции.
Обучите регрессоры Ridge и Lasso с параметрами по умолчанию и убедитесь, что проблема с весами решилась.
End of explanation
# Код 3.1 (1 балл)
alphas = np.arange(1, 500, 50)
coefs_lasso = np.zeros((alphas.shape[0], X.shape[1])) # матрица весов размера (число регрессоров) x (число признаков)
coefs_ridge = np.zeros((alphas.shape[0], X.shape[1]))
# Для каждого значения коэффициента из alphas обучите регрессор Lasso
# и запишите веса в соответствующую строку матрицы coefs_lasso (вспомните встроенную в python функцию enumerate),
# а затем обучите Ridge и запишите веса в coefs_ridge.
i=0
for alpha in alphas:
for coef in enumerate(lass.coef_):
lass=Lasso(alpha)
lass.fit(X, y)
coefs_lasso[i,coef[0]]=lass.coef_[coef[0]]
i+=1
print 'Lasso\n',zip(coefs_lasso,alphas)
i=0
for alpha in alphas:
for coef in enumerate(rid.coef_):
rid=Ridge(alpha)
rid.fit(X, y)
coefs_ridge[i,coef[0]]=rid.coef_[coef[0]]
i+=1
print 'Ridge\n',zip(coefs_ridge,alphas)
Explanation: Проблема вторая: неинформативные признаки
В отличие от L2-регуляризации, L1 обнуляет веса при некоторых признаках. Объяснение данному факту дается в одной из лекций курса.
Давайте пронаблюдаем, как меняются веса при увеличении коэффициента регуляризации $\alpha$ (в лекции коэффициент при регуляризаторе мог быть обозначен другой буквой).
End of explanation
plt.figure(figsize=(8, 5))
for coef, feature in zip(coefs_lasso.T, df.columns):
plt.plot(alphas, coef, label=feature, color=np.random.rand(3))
plt.legend(loc="upper right", bbox_to_anchor=(1.4, 0.95))
plt.xlabel("alpha")
plt.ylabel("feature weight")
plt.title("Lasso")
plt.figure(figsize=(8, 5))
for coef, feature in zip(coefs_ridge.T, df.columns):
plt.plot(alphas, coef, label=feature, color=np.random.rand(3))
plt.legend(loc="upper right", bbox_to_anchor=(1.4, 0.95))
plt.xlabel("alpha")
plt.ylabel("feature weight")
plt.title("Ridge")
Explanation: Визуализируем динамику весов при увеличении параметра регуляризации:
End of explanation
from sklearn.linear_model import LassoCV
# Код 3.2 (1 балл)
# Обучите регрессор LassoCV на всех параметрах регуляризации из alpha
# Постройте график _усредненного_ по строкам MSE в зависимости от alpha.
# Выведите выбранное alpha, а также пары "признак-коэффициент" для обученного вектора коэффициентов
alphas = np.arange(1, 100, 5)
lass_cv=LassoCV(alphas=alphas)
lass_cv.fit(X,y)
print 'Выбранное alp = ',lass_cv.alpha_
print '\n Признак-коэфф: \n', zip(df.columns,lass_cv.coef_)
average = map(lambda elem: sum(elem) / 3, lass_cv.mse_path_[:])
plt.plot(lass_cv.alphas_, average)
plt.title('MSE & alphas')
plt.xlabel('alphas')
plt.ylabel('MSE')
Explanation: Ответы на следующие вопросы можно давать, глядя на графики или выводя коэффициенты на печать.
Блок 2. Ответьте на вопросы (каждый 0.25 балла):
1. Какой регуляризатор (Ridge или Lasso) агрессивнее уменьшает веса при одном и том же alpha?
* Ответ: Lass
1. Что произойдет с весами Lasso, если alpha сделать очень большим? Поясните, почему так происходит.
* Ответ: При большом alpha коэффициенты обнулятся из-за низкой предсказательной способности.
1. Можно ли утверждать, что Lasso исключает один из признаков windspeed при любом значении alpha > 0? А Ridge? Ситается, что регуляризатор исключает признак, если коэффициент при нем < 1e-3.
* Ответ: Да. Нет.
1. Какой из регуляризаторов подойдет для отбора неинформативных признаков?
* Ответ: Lasso
Далее будем работать с Lasso.
Итак, мы видим, что при изменении alpha модель по-разному подбирает коэффициенты признаков. Нам нужно выбрать наилучшее alpha.
Для этого, во-первых, нам нужна метрика качества. Будем использовать в качестве метрики сам оптимизируемый функционал метода наименьших квадратов, то есть Mean Square Error.
Во-вторых, нужно понять, на каких данных эту метрику считать. Нельзя выбирать alpha по значению MSE на обучающей выборке, потому что тогда мы не сможем оценить, как модель будет делать предсказания на новых для нее данных. Если мы выберем одно разбиение выборки на обучающую и тестовую (это называется holdout), то настроимся на конкретные "новые" данные, и вновь можем переобучиться. Поэтому будем делать несколько разбиений выборки, на каждом пробовать разные значения alpha, а затем усреднять MSE. Удобнее всего делать такие разбиения кросс-валидацией, то есть разделить выборку на K частей, или блоков, и каждый раз брать одну из них как тестовую, а из оставшихся блоков составлять обучающую выборку.
Делать кросс-валидацию для регрессии в sklearn совсем просто: для этого есть специальный регрессор, LassoCV, который берет на вход список из alpha и для каждого из них вычисляет MSE на кросс-валидации. После обучения (если оставить параметр cv=3 по умолчанию) регрессор будет содержать переменную mse_path_, матрицу размера len(alpha) x k, k = 3 (число блоков в кросс-валидации), содержащую значения MSE на тесте для соответствующих запусков. Кроме того, в переменной alpha_ будет храниться выбранное значение параметра регуляризации, а в coef_, традиционно, обученные веса, соответствующие этому alpha_.
Обратите внимание, что регрессор может менять порядок, в котором он проходит по alphas; для сопоставления с матрицей MSE лучше использовать переменную регрессора alphas_.
End of explanation
# Код 3.3 (1 балл)
# Выведите значения alpha, соответствующие минимумам MSE на каждом разбиении (то есть по столбцам).
# На трех отдельных графиках визуализируйте столбцы .mse_path_
min_mse = lass_cv.mse_path_.argmin(axis = 0)
print 'Минимумы mse на кождом разбиении = ', min_mse
def plt_mse(min_mse):
plt.figure()
plt.plot(lass_cv.alphas_, min_mse)
plt.title("MSE")
plt.xlabel("alpha")
plt.ylabel("MSE")
mse_arr = lass_cv.mse_path_.T
for i in range(len(mse_arr)):
plt_mse(mse_arr[i])
Explanation: Итак, мы выбрали некоторый параметр регуляризации. Давайте посмотрим, какие бы мы выбирали alpha, если бы делили выборку только один раз на обучающую и тестовую, то есть рассмотрим траектории MSE, соответствующие отдельным блокам выборки.
End of explanation |
13,437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SciPy 시작하기
SciPy란
과학기술계산용 함수 및 알고리즘 제공
Home
http
Step1: scipy.constants 상수
특별 상수
scipy.pi
기타 상수
scipy.constants.XXXX
단위
yotta, zetta, exa, peta, tera, giga, mega, kilo, hecto, deka
deci, centi, milli, micro, nano, pico, femto, atto, zepto
lb, oz, degree
inch, foot, yard, mile, au, light_year, parsec
hectare, acre, gallon
mph, mach, knot
Step2: scipy.special 수학 함수
Gamma, Beta, Erf, Logit
Bessel, Legendre
Step3: scipy.linalg 선형대수
inv, pinv, det
Step4: scipy.interpolate 보간
자료 사이의 빠진 부분을 유추
1차원 보간
2차원 보간
interpolate는 점들 간 이어지게끔 하는 곡선(선형과 거의 유사함)
하지만 데이터분석에서는 안 쓴다. 과최적화가 걸린다. 예쁘게 만들기 위해서만 쓴다.
Step5: scipy.optimize 최적화
함수의 최소값 찾기
Step6: scipy.fftpack 고속 퓨리에 변환 Fast Fourier transforms
신호를 주파수(frequency)영역으로 변환
스펙트럼(spectrum) | Python Code:
rv = sp.stats.norm(loc=10, scale=10)
rv.rvs(size=(3, 10), random_state=1)
sns.distplot(rv.rvs(size=10000, random_state=1))
xx = np.linspace(-40, 60, 1000)
pdf = rv.pdf(xx)
plt.plot(xx, pdf)
cdf = rv.cdf(xx)
plt.plot(xx, cdf)
Explanation: SciPy 시작하기
SciPy란
과학기술계산용 함수 및 알고리즘 제공
Home
http://www.scipy.org/
Documentation
http://docs.scipy.org/doc/
Tutorial
http://docs.scipy.org/doc/scipy/reference/tutorial/index.html
http://www.scipy-lectures.org/intro/scipy.html
SciPy Subpackages
scipy.stats
통계 Statistics
scipy.constants
물리/수학 상수 Physical and mathematical constants
scipy.special
수학 함수 Any special mathematical functions
scipy.linalg
선형 대수 Linear algebra routines
scipy.interpolate
보간 Interpolation
scipy.optimize
최적화 Optimization
scipy.fftpack
Fast Fourier transforms
scipy.stats 통계
Random Variable
확률 밀도 함수, 누적 확률 함수
샘플 생성
Parameter Estimation (fitting)
Test
scipy.stats 에서 제공하는 확률 모형
http://docs.scipy.org/doc/scipy/reference/stats.html
Continuous
http://docs.scipy.org/doc/scipy/reference/tutorial/stats/continuous.html#continuous-distributions-in-scipy-stats
uniform: A uniform continuous random variable.
norm: A normal continuous random variable.
beta: A beta continuous random variable.
gamma: A gamma continuous random variable.
t: A Student’s T continuous random variable.
chi2: A chi-squared continuous random variable.
f: An F continuous random variable.
multivariate_normal: A multivariate normal random variable.
dirichlet: A Dirichlet random variable.
wishart: A Wishart random variable.
Discrete
http://docs.scipy.org/doc/scipy/reference/tutorial/stats/discrete.html#discrete-distributions-in-scipy-stats
bernoulli: A Bernoulli discrete random variable.
binom: A binomial discrete random variable.
boltzmann: A Boltzmann (Truncated Discrete Exponential) random variable.
random variable 사용 방법
파라미터를 주고 random variable object 생성
method 사용
Common Method
rvs: 샘플 생성
pdf or pmf: Probability Density Function
cdf: Cumulative Distribution Function
stats: Return mean, variance, (Fisher’s) skew, or (Fisher’s) kurtosis
moment: non-central moments of the distribution
fit: parameter estimation
Common Parameters
parameter는 모형 마다 달라진다.
random_state: seed
size: 생성하려는 샘플의 shape
loc: 일반적으로 평균의 값
scale: 일반적으로 표준편차의 값
End of explanation
sp.pi
import scipy.constants
sp.constants.c # speed of light
Explanation: scipy.constants 상수
특별 상수
scipy.pi
기타 상수
scipy.constants.XXXX
단위
yotta, zetta, exa, peta, tera, giga, mega, kilo, hecto, deka
deci, centi, milli, micro, nano, pico, femto, atto, zepto
lb, oz, degree
inch, foot, yard, mile, au, light_year, parsec
hectare, acre, gallon
mph, mach, knot
End of explanation
x = np.linspace(-3, 3, 1000)
y1 = sp.special.erf(x)
a = plt.subplot(211)
plt.plot(x, y1)
plt.title("elf")
a.xaxis.set_ticklabels([])
y2 = sp.special.expit(x)
plt.subplot(212)
plt.plot(x, y2)
plt.title("logistic")
Explanation: scipy.special 수학 함수
Gamma, Beta, Erf, Logit
Bessel, Legendre
End of explanation
A = np.array([[1, 2],
[3, 4]])
sp.linalg.inv(A)
sp.linalg.det(A)
Explanation: scipy.linalg 선형대수
inv, pinv, det
End of explanation
from scipy.interpolate import interp1d
x = np.linspace(0, 10, num=11, endpoint=True)
y = np.cos(-x**2/9.0)
f = interp1d(x, y)
f2 = interp1d(x, y, kind='cubic')
xnew = np.linspace(0, 10, num=41)
plt.plot(x, y, 'o', xnew, f(xnew), '-', xnew, f2(xnew), '--')
plt.legend(['data', 'linear', 'cubic'])
x, y = np.mgrid[-1:1:20j, -1:1:20j]
z = (x+y) * np.exp(-6.0*(x*x+y*y))
plt.pcolormesh(x, y, z)
xnew, ynew = np.mgrid[-1:1:100j, -1:1:100j]
tck = sp.interpolate.bisplrep(x, y, z, s=0)
znew = sp.interpolate.bisplev(xnew[:, 0], ynew[0, :], tck)
plt.pcolormesh(xnew, ynew, znew)
Explanation: scipy.interpolate 보간
자료 사이의 빠진 부분을 유추
1차원 보간
2차원 보간
interpolate는 점들 간 이어지게끔 하는 곡선(선형과 거의 유사함)
하지만 데이터분석에서는 안 쓴다. 과최적화가 걸린다. 예쁘게 만들기 위해서만 쓴다.
End of explanation
from scipy import optimize
def f(x):
return x**2 + 10*np.sin(x)
x = np.arange(-10, 10, 0.1)
plt.plot(x, f(x))
result = optimize.minimize(f, 4)
print(result)
x0 = result['x']
x0
plt.plot(x, f(x));
plt.hold(True)
plt.scatter(x0, f(x0), s=200)
def sixhump(x):
return (4 - 2.1*x[0]**2 + x[0]**4 / 3.) * x[0]**2 + x[0] * x[1] + (-4 + \
4*x[1]**2) * x[1] **2
x = np.linspace(-2, 2)
y = np.linspace(-1, 1)
xg, yg = np.meshgrid(x, y)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = np.linspace(-2, 2)
y = np.linspace(-1, 1)
xg, yg = np.meshgrid(x, y)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_surface(xg, yg, sixhump([xg, yg]), rstride=1, cstride=1,
cmap=plt.cm.jet, linewidth=0, antialiased=False)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('f(x, y)')
ax.set_title('Six-hump Camelback function')
plt.show()
x1 = optimize.minimize(sixhump, (1, 1))['x']
x2 = optimize.minimize(sixhump, (-1, -1))['x']
print(x1, x2)
Explanation: scipy.optimize 최적화
함수의 최소값 찾기
End of explanation
time_step = 0.02
period = 5.
time_vec = np.arange(0, 20, time_step)
sig = np.sin(2 * np.pi / period * time_vec) + 0.5 * np.random.randn(time_vec.size)
plt.plot(sig)
import scipy.fftpack
sample_freq = sp.fftpack.fftfreq(sig.size, d=time_step)
sig_fft = sp.fftpack.fft(sig)
pidxs = np.where(sample_freq > 0)
freqs, power = sample_freq[pidxs], np.abs(sig_fft)[pidxs]
freq = freqs[power.argmax()]
plt.stem(freqs[:50], power[:50])
plt.xlabel('Frequency [Hz]')
plt.ylabel('plower')
Explanation: scipy.fftpack 고속 퓨리에 변환 Fast Fourier transforms
신호를 주파수(frequency)영역으로 변환
스펙트럼(spectrum)
End of explanation |
13,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Competition site
http
Step1: Constants
Step2: Data Loading
- Costs Data
Step3: - Elevation Data
Step4: - Sample Data
Parcels that have already been auctioned and exploited (see the gold_available column)
Why are the Northing and Eastig shifted by .5?
Step5: - Auction Parcels
The objective is to figure out on which of these to bid and how much
Step6: Fixed costs calculator
Step7: Variable costs calculator
Use a simple linear regression
Step8: Estimate the average amount of gold under each biddable parcel
Naive approach
Step9: Radius = 3 seems to fit the bill
Compute the total extraction costs and estimated profit
Step10: Select top 5 most promising parcels that also match Kevin's predictions
Step11: Remove the total costs from the available budget
Step12: Place bids using an empiric "Gauss Distribution"
Offer 7 million for the middle 3 parcels
Divide the rest evenly for the remaining 2 | Python Code:
import pandas as pd
pd.set_option('display.float_format', '{:.2f}'.format)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
Explanation: Competition site
http://www.kaybensoft.com/dublinr/10_competition_site.html
Imports
End of explanation
total_budget = 50000000
gold_price = 1500 # dollars per ounce
Explanation: Constants
End of explanation
costs_data = pd.read_csv("data/costs_data.csv")
costs_data.head()
Explanation: Data Loading
- Costs Data
End of explanation
elevation_data = pd.read_csv("data/elevation_data.csv")
elevation_data.head()
Explanation: - Elevation Data
End of explanation
sample_data = pd.read_csv("data/sample_data.csv")
sample_data.head()
Explanation: - Sample Data
Parcels that have already been auctioned and exploited (see the gold_available column)
Why are the Northing and Eastig shifted by .5?
End of explanation
auction_parcels = pd.read_csv("data/auction_parcels.csv")
auction_parcels.head()
Explanation: - Auction Parcels
The objective is to figure out on which of these to bid and how much
End of explanation
def fixed_cost(row):
if row["elevation"] < 0:
return 4000000.0
elif row["elevation"] > 0 and row["elevation"] <= 500:
return 3000000.0
else:
return 8000000.0
Explanation: Fixed costs calculator
End of explanation
from sklearn import linear_model
regressions = {}
for elevation in ["low", "med", "high"]:
regression = linear_model.LinearRegression()
regression.fit(costs_data.loc[:, ["gold_amount"]], costs_data.loc[:,elevation])
regressions[elevation] = regression
def variable_cost(row):
if row["elevation"] < 0:
return regressions['low'].predict(row["elevation"])[0]
elif row["elevation"] >= 0 and row["elevation"] <= 700:
return regressions['med'].predict(row["elevation"])[0]
else:
return regressions['high'].predict(row["elevation"])[0]
Explanation: Variable costs calculator
Use a simple linear regression
End of explanation
plt.gca().set_autoscale_on(False)
plt.axis([0.0, 150.0, 0.0, 150.0])
plt.scatter(sample_data.Easting, sample_data.Northing, s=1, color='y', label='Sample Data')
plt.scatter(auction_parcels.Easting, auction_parcels.Northing, marker='x', color='r', label='Auction Parcels')
plt.scatter(auction_parcels.Easting, auction_parcels.Northing, s=1000, facecolors='none', edgecolors='b', label='Radius')
lgnd = plt.legend(scatterpoints=1, fontsize=10)
lgnd.legendHandles[0]._sizes = [50]
lgnd.legendHandles[1]._sizes = [50]
lgnd.legendHandles[2]._sizes = [50]
plt.xlabel('Easting')
plt.ylabel('Northing')
def estimate_gold(radius):
gold_estimations = []
for idx_ap, ap in auction_parcels.iterrows():
sum = 0
count = 0
for idx_sd, sd in sample_data.iterrows():
if (radius >= np.linalg.norm(np.array([sd['Easting'], sd['Northing']]) - np.array([ap['Easting'], ap['Northing']]))):
sum += sd['gold_available']
count += 1
sum = sum / count if count > 0 else 0
estimated_gold_column = 'estimated_gold_r{:d}'.format(radius)
included_samples_column = 'included_samples_r{:d}'.format(radius)
gold_estimations.append({'parcel_id': ap['parcel_id'], estimated_gold_column: sum, included_samples_column: count})
return gold_estimations
gold_estimations = auction_parcels.loc[:,['parcel_id']]
for radius in range(1, 10):
gold_estimations = gold_estimations.merge(pd.DataFrame(estimate_gold(radius)), on='parcel_id')
gold_estimations
Explanation: Estimate the average amount of gold under each biddable parcel
Naive approach: average the known quantities of gold in a circle of a given radius around each parcel
Use the smallest circle radius for which the gold amount estimations start to increase monotonically
run the algorithm for radius in range(1, 10) and eyeball a decent value
End of explanation
total_costs = gold_estimations.loc[:, ['parcel_id', 'estimated_gold_r3']]
total_costs = total_costs.merge(elevation_data.loc[:, ['parcel_id', 'elevation']], on='parcel_id')
total_costs['total_cost'] = total_costs.apply(lambda row: fixed_cost(row), axis = 1) + total_costs.apply(lambda row: variable_cost(row), axis = 1)
total_costs['estimated_profit'] = total_costs.apply(lambda row: gold_price * row['estimated_gold_r3'] - row['total_cost'], axis = 1)
total_costs.sort_values(by=['estimated_profit'])
Explanation: Radius = 3 seems to fit the bill
Compute the total extraction costs and estimated profit
End of explanation
# Parcel IDs from Kevin: [7837, 19114, 20194, 11489,10905,1790,13249,14154,12810,11614,12221]
selected_parcels = [19114, 20194, 11489, 11614, 12810]
selected_total_costs = total_costs.loc[total_costs.parcel_id.isin(selected_parcels), :].sort_values(by=['estimated_profit'])
selected_total_costs
Explanation: Select top 5 most promising parcels that also match Kevin's predictions
End of explanation
total_cost = selected_total_costs.total_cost.sum()
bid_money = total_budget - total_cost
bid_money
Explanation: Remove the total costs from the available budget
End of explanation
bids = selected_total_costs.loc[:, ['parcel_id']]
max_bid = 7000000
remaining_money = bid_money - (max_bid * 3)
bids['bid_amount'] = pd.Series([remaining_money / 2, max_bid, max_bid, max_bid, remaining_money / 2]).values
bids
bids.to_csv("kevin_mihai.csv", cols=['parcel_id', 'bid_amount'], index=False)
Explanation: Place bids using an empiric "Gauss Distribution"
Offer 7 million for the middle 3 parcels
Divide the rest evenly for the remaining 2
End of explanation |
13,439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 컨볼루셔널 변이형 오토인코더
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: MNIST 데이터세트 로드하기
각 MNIST 이미지는 원래 각각 0-255 사이인 784개의 정수로 구성된 벡터이며 픽셀 강도를 나타냅니다. 모델에서 Bernoulli 분포를 사용하여 각 픽셀을 모델링하고 데이터세트를 정적으로 이진화합니다.
Step3: tf.data를 사용하여 데이터 배치 및 셔플 처리하기
Step5: tf.keras.Sequential을 사용하여 인코더 및 디코더 네트워크 정의하기
이 VAE 예제에서는 인코더 및 디코더 네트워크에 두 개의 작은 ConvNet을 사용합니다. 문헌에서, 이들 네트워크는 각각 추론/인식 및 생성 모델로도 지칭됩니다. 구현을 단순화하기 위해 tf.keras.Sequential을 사용합니다. 다음 설명에서 $x$ 및 $z$는 각각 관측 값과 잠재 변수를 나타냅니다.
인코더 네트워크
이것은 근사 사후 분포 $q(z|x)$를 정의합니다. 이 분포는 관측 값을 입력으로 받고 잠재 표현 $z$의 조건부 분포를 지정하기 위한 매개변수 세트를 출력합니다. 이 예에서는 분포를 대각선 가우스로 간단히 모델링하고 네트워크는 인수 분해된 가우스의 평균 및 로그-분산 매개변수를 출력합니다. 수치 안정성을 위해 분산을 직접 출력하지 않고 로그-분산을 출력합니다.
디코더 네트워크
잠재 샘플 $z$를 입력으로 사용하여 관측 값의 조건부 분포에 대한 매개변수를 출력하는 관측 값 $p(x|z)$의 조건부 분포를 정의합니다. 잠재 이전 분포 $p(z)$를 단위 가우스로 모델링합니다.
재매개변수화 트릭
훈련 중에 디코더에 대해 샘플 $z$를 생성하기 위해, 입력 관측 값 $x$가 주어졌을 때 인코더에 의해 출력된 매개변수로 정의된 잠재 분포로부터 샘플링할 수 있습니다. 그러나 역전파가 무작위 노드를 통해 흐를 수 없기 때문에 이 샘플링 작업에서 병목 현상이 발생합니다.
이를 해결하기 위해 재매개변수화 트릭을 사용합니다. 이 예에서는 디코더 매개변수와 다른 매개변수 $\epsilon$을 다음과 같이 사용하여 $z$를 근사시킵니다.
$$z = \mu + \sigma \odot \epsilon$$
여기서 $\mu$ 및 $\sigma$는 각각 가우스 분포의 평균 및 표준 편차를 나타냅니다. 이들은 디코더 출력에서 파생될 수 있습니다. $\epsilon$은 $z$의 무질서도를 유지하는 데 사용되는 무작위 노이즈로 생각할 수 있습니다. 표준 정규 분포에서 $\epsilon$을 생성합니다.
잠재 변수 $z$는 이제 $\mu$, $\sigma$ 및 $\epsilon$의 함수에 의해 생성되며, 이를 통해 모델은 각각 $\mu$ 및 $\sigma$를 통해 인코더의 그래디언트를 역전파하면서 $epsepson$를 통해 무질서도를 유지할 수 있습니다.
네트워크 아키텍처
인코더 네트워크의 경우 두 개의 컨볼루션 레이어, 그리고 이어서 완전히 연결된 레이어를 사용합니다. 디코더 네트워크에서 완전히 연결된 레이어와 그 뒤에 세 개의 컨볼루션 전치 레이어(일부 컨텍스트에서는 디컨볼루션 레이어라고도 함)를 사용하여 이 아키텍처를 미러링합니다. 미니 배치 사용으로 인한 추가 무질서도가 샘플링의 무질서에 더해 불안정성을 높일 수 있으므로 VAE 훈련시 배치 정규화를 사용하지 않는 것이 일반적입니다.
Step7: 손실 함수 및 옵티마이저 정의하기
VAE는 한계 로그-우도에 대한 ELBO(evidence lower bound)를 최대화하여 훈련합니다.
$$\log p(x) \ge \text{ELBO} = \mathbb{E}_{q(z|x)}\left[\log \frac{p(x, z)}{q(z|x)}\right].$$
실제로, 이 예상에 대한 단일 샘플 Monte Carlo 추정값을 최적화합니다.
$$\log p(x| z) + \log p(z) - \log q(z|x),$$ 여기서 $z$는 $q(z|x)$에서 샘플링됩니다.
참고
Step8: 훈련하기
데이터세트를 반복하여 시작합니다.
반복하는 동안 매번 이미지를 인코더로 전달하여 근사적인 사후 $q(z|x)$의 평균 및 로그-분산 매개변수 세트를 얻습니다.
그런 다음 $q(z|x)$에서 샘플링하기 위해 재매개변수화 트릭을 적용합니다.
마지막으로, 생성된 분포 $p(x|z)$의 로짓을 얻기 위해 재매개변수화된 샘플을 디코더로 전달합니다.
참고
Step9: 마지막 훈련 epoch에서 생성된 이미지 표시하기
Step10: 저장된 모든 이미지의 애니메이션 GIF 표시하기
Step12: 잠재 공간에서 숫자의 2D 형태 표시하기
아래 코드를 실행하면 다른 숫자 클래스의 연속 분포가 표시되며 각 숫자는 2D 잠재 공간에서 다른 숫자로 모핑됩니다. 여기서는 잠재 공간에 대한 표준 정규 분포를 생성하기 위해 TensorFlow Probability를 사용합니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install tensorflow-probability
# to generate gifs
!pip install imageio
!pip install git+https://github.com/tensorflow/docs
from IPython import display
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import PIL
import tensorflow as tf
import tensorflow_probability as tfp
import time
Explanation: 컨볼루셔널 변이형 오토인코더
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/generative/cvae"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.org에서보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/generative/cvae.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/generative/cvae.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/generative/cvae.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
이 노트북은 MNIST 데이터세트에서 변이형 오토인코더(VAE, Variational Autoencoder)를 훈련하는 방법을 보여줍니다(1 , 2). VAE는 오토인코더의 확률론적 형태로, 높은 차원의 입력 데이터를 더 작은 표현으로 압축하는 모델입니다. 입력을 잠재 벡터에 매핑하는 기존의 오토인코더와 달리 VAE는 입력 데이터를 가우스 평균 및 분산과 같은 확률 분포의 매개변수에 매핑합니다. 이 방식은 연속적이고 구조화된 잠재 공간을 생성하므로 이미지 생성에 유용합니다.
설정
End of explanation
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
def preprocess_images(images):
images = images.reshape((images.shape[0], 28, 28, 1)) / 255.
return np.where(images > .5, 1.0, 0.0).astype('float32')
train_images = preprocess_images(train_images)
test_images = preprocess_images(test_images)
train_size = 60000
batch_size = 32
test_size = 10000
Explanation: MNIST 데이터세트 로드하기
각 MNIST 이미지는 원래 각각 0-255 사이인 784개의 정수로 구성된 벡터이며 픽셀 강도를 나타냅니다. 모델에서 Bernoulli 분포를 사용하여 각 픽셀을 모델링하고 데이터세트를 정적으로 이진화합니다.
End of explanation
train_dataset = (tf.data.Dataset.from_tensor_slices(train_images)
.shuffle(train_size).batch(batch_size))
test_dataset = (tf.data.Dataset.from_tensor_slices(test_images)
.shuffle(test_size).batch(batch_size))
Explanation: tf.data를 사용하여 데이터 배치 및 셔플 처리하기
End of explanation
class CVAE(tf.keras.Model):
Convolutional variational autoencoder.
def __init__(self, latent_dim):
super(CVAE, self).__init__()
self.latent_dim = latent_dim
self.encoder = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latent_dim + latent_dim),
]
)
self.decoder = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(latent_dim,)),
tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(7, 7, 32)),
tf.keras.layers.Conv2DTranspose(
filters=64, kernel_size=3, strides=2, padding='same',
activation='relu'),
tf.keras.layers.Conv2DTranspose(
filters=32, kernel_size=3, strides=2, padding='same',
activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=1, padding='same'),
]
)
@tf.function
def sample(self, eps=None):
if eps is None:
eps = tf.random.normal(shape=(100, self.latent_dim))
return self.decode(eps, apply_sigmoid=True)
def encode(self, x):
mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
eps = tf.random.normal(shape=mean.shape)
return eps * tf.exp(logvar * .5) + mean
def decode(self, z, apply_sigmoid=False):
logits = self.decoder(z)
if apply_sigmoid:
probs = tf.sigmoid(logits)
return probs
return logits
Explanation: tf.keras.Sequential을 사용하여 인코더 및 디코더 네트워크 정의하기
이 VAE 예제에서는 인코더 및 디코더 네트워크에 두 개의 작은 ConvNet을 사용합니다. 문헌에서, 이들 네트워크는 각각 추론/인식 및 생성 모델로도 지칭됩니다. 구현을 단순화하기 위해 tf.keras.Sequential을 사용합니다. 다음 설명에서 $x$ 및 $z$는 각각 관측 값과 잠재 변수를 나타냅니다.
인코더 네트워크
이것은 근사 사후 분포 $q(z|x)$를 정의합니다. 이 분포는 관측 값을 입력으로 받고 잠재 표현 $z$의 조건부 분포를 지정하기 위한 매개변수 세트를 출력합니다. 이 예에서는 분포를 대각선 가우스로 간단히 모델링하고 네트워크는 인수 분해된 가우스의 평균 및 로그-분산 매개변수를 출력합니다. 수치 안정성을 위해 분산을 직접 출력하지 않고 로그-분산을 출력합니다.
디코더 네트워크
잠재 샘플 $z$를 입력으로 사용하여 관측 값의 조건부 분포에 대한 매개변수를 출력하는 관측 값 $p(x|z)$의 조건부 분포를 정의합니다. 잠재 이전 분포 $p(z)$를 단위 가우스로 모델링합니다.
재매개변수화 트릭
훈련 중에 디코더에 대해 샘플 $z$를 생성하기 위해, 입력 관측 값 $x$가 주어졌을 때 인코더에 의해 출력된 매개변수로 정의된 잠재 분포로부터 샘플링할 수 있습니다. 그러나 역전파가 무작위 노드를 통해 흐를 수 없기 때문에 이 샘플링 작업에서 병목 현상이 발생합니다.
이를 해결하기 위해 재매개변수화 트릭을 사용합니다. 이 예에서는 디코더 매개변수와 다른 매개변수 $\epsilon$을 다음과 같이 사용하여 $z$를 근사시킵니다.
$$z = \mu + \sigma \odot \epsilon$$
여기서 $\mu$ 및 $\sigma$는 각각 가우스 분포의 평균 및 표준 편차를 나타냅니다. 이들은 디코더 출력에서 파생될 수 있습니다. $\epsilon$은 $z$의 무질서도를 유지하는 데 사용되는 무작위 노이즈로 생각할 수 있습니다. 표준 정규 분포에서 $\epsilon$을 생성합니다.
잠재 변수 $z$는 이제 $\mu$, $\sigma$ 및 $\epsilon$의 함수에 의해 생성되며, 이를 통해 모델은 각각 $\mu$ 및 $\sigma$를 통해 인코더의 그래디언트를 역전파하면서 $epsepson$를 통해 무질서도를 유지할 수 있습니다.
네트워크 아키텍처
인코더 네트워크의 경우 두 개의 컨볼루션 레이어, 그리고 이어서 완전히 연결된 레이어를 사용합니다. 디코더 네트워크에서 완전히 연결된 레이어와 그 뒤에 세 개의 컨볼루션 전치 레이어(일부 컨텍스트에서는 디컨볼루션 레이어라고도 함)를 사용하여 이 아키텍처를 미러링합니다. 미니 배치 사용으로 인한 추가 무질서도가 샘플링의 무질서에 더해 불안정성을 높일 수 있으므로 VAE 훈련시 배치 정규화를 사용하지 않는 것이 일반적입니다.
End of explanation
optimizer = tf.keras.optimizers.Adam(1e-4)
def log_normal_pdf(sample, mean, logvar, raxis=1):
log2pi = tf.math.log(2. * np.pi)
return tf.reduce_sum(
-.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
axis=raxis)
def compute_loss(model, x):
mean, logvar = model.encode(x)
z = model.reparameterize(mean, logvar)
x_logit = model.decode(z)
cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
logpz = log_normal_pdf(z, 0., 0.)
logqz_x = log_normal_pdf(z, mean, logvar)
return -tf.reduce_mean(logpx_z + logpz - logqz_x)
@tf.function
def train_step(model, x, optimizer):
Executes one training step and returns the loss.
This function computes the loss and gradients, and uses the latter to
update the model's parameters.
with tf.GradientTape() as tape:
loss = compute_loss(model, x)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
Explanation: 손실 함수 및 옵티마이저 정의하기
VAE는 한계 로그-우도에 대한 ELBO(evidence lower bound)를 최대화하여 훈련합니다.
$$\log p(x) \ge \text{ELBO} = \mathbb{E}_{q(z|x)}\left[\log \frac{p(x, z)}{q(z|x)}\right].$$
실제로, 이 예상에 대한 단일 샘플 Monte Carlo 추정값을 최적화합니다.
$$\log p(x| z) + \log p(z) - \log q(z|x),$$ 여기서 $z$는 $q(z|x)$에서 샘플링됩니다.
참고: KL 항을 분석적으로 계산할 수도 있지만 여기서는 단순화를 위해 Monte Carlo 예측 도구에 세 항을 모두 통합합니다.
End of explanation
epochs = 10
# set the dimensionality of the latent space to a plane for visualization later
latent_dim = 2
num_examples_to_generate = 16
# keeping the random vector constant for generation (prediction) so
# it will be easier to see the improvement.
random_vector_for_generation = tf.random.normal(
shape=[num_examples_to_generate, latent_dim])
model = CVAE(latent_dim)
def generate_and_save_images(model, epoch, test_sample):
mean, logvar = model.encode(test_sample)
z = model.reparameterize(mean, logvar)
predictions = model.sample(z)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i + 1)
plt.imshow(predictions[i, :, :, 0], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
# Pick a sample of the test set for generating output images
assert batch_size >= num_examples_to_generate
for test_batch in test_dataset.take(1):
test_sample = test_batch[0:num_examples_to_generate, :, :, :]
generate_and_save_images(model, 0, test_sample)
for epoch in range(1, epochs + 1):
start_time = time.time()
for train_x in train_dataset:
train_step(model, train_x, optimizer)
end_time = time.time()
loss = tf.keras.metrics.Mean()
for test_x in test_dataset:
loss(compute_loss(model, test_x))
elbo = -loss.result()
display.clear_output(wait=False)
print('Epoch: {}, Test set ELBO: {}, time elapse for current epoch: {}'
.format(epoch, elbo, end_time - start_time))
generate_and_save_images(model, epoch, test_sample)
Explanation: 훈련하기
데이터세트를 반복하여 시작합니다.
반복하는 동안 매번 이미지를 인코더로 전달하여 근사적인 사후 $q(z|x)$의 평균 및 로그-분산 매개변수 세트를 얻습니다.
그런 다음 $q(z|x)$에서 샘플링하기 위해 재매개변수화 트릭을 적용합니다.
마지막으로, 생성된 분포 $p(x|z)$의 로짓을 얻기 위해 재매개변수화된 샘플을 디코더로 전달합니다.
참고: 훈련 세트에 60k 데이터 포인트와 테스트 세트에 10k 데이터 포인트가 있는 keras에 의해 로드된 데이터세트를 사용하기 때문에 테스트세트에 대한 결과 ELBO는 Larochelle MNIST의 동적 이진화를 사용하는 문헌에서 보고된 결과보다 약간 높습니다.
이미지 생성하기
훈련을 마쳤으면 이미지를 생성할 차례입니다.
우선, 단위 가우스 사전 분포 $p(z)$에서 잠재 벡터 세트를 샘플링합니다.
그러면 생성기가 잠재 샘플 $z$를 관측 값의 로짓으로 변환하여 분포 $p(x|z)$를 제공합니다.
여기서 Bernoulli 분포의 확률을 플롯합니다.
End of explanation
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
plt.imshow(display_image(epoch))
plt.axis('off') # Display images
Explanation: 마지막 훈련 epoch에서 생성된 이미지 표시하기
End of explanation
anim_file = 'cvae.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import tensorflow_docs.vis.embed as embed
embed.embed_file(anim_file)
Explanation: 저장된 모든 이미지의 애니메이션 GIF 표시하기
End of explanation
def plot_latent_images(model, n, digit_size=28):
Plots n x n digit images decoded from the latent space.
norm = tfp.distributions.Normal(0, 1)
grid_x = norm.quantile(np.linspace(0.05, 0.95, n))
grid_y = norm.quantile(np.linspace(0.05, 0.95, n))
image_width = digit_size*n
image_height = image_width
image = np.zeros((image_height, image_width))
for i, yi in enumerate(grid_x):
for j, xi in enumerate(grid_y):
z = np.array([[xi, yi]])
x_decoded = model.sample(z)
digit = tf.reshape(x_decoded[0], (digit_size, digit_size))
image[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit.numpy()
plt.figure(figsize=(10, 10))
plt.imshow(image, cmap='Greys_r')
plt.axis('Off')
plt.show()
plot_latent_images(model, 20)
Explanation: 잠재 공간에서 숫자의 2D 형태 표시하기
아래 코드를 실행하면 다른 숫자 클래스의 연속 분포가 표시되며 각 숫자는 2D 잠재 공간에서 다른 숫자로 모핑됩니다. 여기서는 잠재 공간에 대한 표준 정규 분포를 생성하기 위해 TensorFlow Probability를 사용합니다.
End of explanation |
13,440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tracking Parameters and Metrics for Vertex AI Custom Training Jobs
Learning objectives
In this notebook, you learn how to
Step1: Please ignore the incompatibility errors.
Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Set gcloud config to your project ID.
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Import libraries and define constants
Import required libraries.
Step10: Initialize Vertex AI and set an experiment
Define experiment name.
Step11: If EXEPERIMENT_NAME is not set, set a default one below
Step12: Initialize the client for Vertex AI.
Step13: Tracking parameters and metrics in Vertex AI custom training jobs
This example uses the Abalone Dataset. For more information about this dataset please visit
Step14: Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model.
Step15: Write the training script
Run the following cell to create the training script that is used in the sample custom training job.
Step16: Launch a custom training job and track its training parameters on Vertex AI ML Metadata
Step17: Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.
Step18: Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins.
Step19: Once model is deployed, perform online prediction using the abalone_test dataset and calculate prediction metrics.
Prepare the prediction dataset.
Step20: Perform online prediction.
Step21: Calculate and track prediction evaluation metrics.
Step22: Extract all parameters and metrics created during this experiment.
Step23: View data in the Cloud Console
Parameters and metrics can also be viewed in the Cloud Console.
Step24: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Install additional packages
! pip3 install -U tensorflow $USER_FLAG
! python3 -m pip install {USER_FLAG} google-cloud-aiplatform --upgrade
! pip3 install scikit-learn {USER_FLAG}
Explanation: Tracking Parameters and Metrics for Vertex AI Custom Training Jobs
Learning objectives
In this notebook, you learn how to:
Track training parameters and prediction metrics for a custom training job.
Extract and perform analysis for all parameters and metrics within an experiment.
Overview
This notebook demonstrates how to track metrics and parameters for Vertex AI custom training jobs, and how to perform detailed analysis using this data.
Dataset
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Install additional packages
Install additional package dependencies not installed in your notebook environment.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Please ignore the incompatibility errors.
Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = "qwiklabs-gcp-03-aaf99941e8b2" # Replace your project ID here
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "qwiklabs-gcp-03-aaf99941e8b2" # Replace your project ID here
Explanation: Otherwise, set your project ID here.
End of explanation
!gcloud config set project $PROJECT_ID
Explanation: Set gcloud config to your project ID.
End of explanation
# Import necessary library and define Timestamp
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
BUCKET_URI = "gs://qwiklabs-gcp-03-aaf99941e8b2" # Replace your bucket name here
REGION = "us-central1" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://qwiklabs-gcp-03-aaf99941e8b2": # Replace your bucket name here
BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
# Create your bucket
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
# Import required libraries
import pandas as pd
from google.cloud import aiplatform
from sklearn.metrics import mean_absolute_error, mean_squared_error
from tensorflow.python.keras.utils import data_utils
Explanation: Import libraries and define constants
Import required libraries.
End of explanation
EXPERIMENT_NAME = "new" # Give your experiment a name of you choice
Explanation: Initialize Vertex AI and set an experiment
Define experiment name.
End of explanation
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
Explanation: If EXEPERIMENT_NAME is not set, set a default one below:
End of explanation
aiplatform.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=BUCKET_URI,
experiment=EXPERIMENT_NAME,
)
Explanation: Initialize the client for Vertex AI.
End of explanation
Download and copy the csv file in your bucket
!wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv
!gsutil cp abalone_train.csv {BUCKET_URI}/data/
gcs_csv_path = f"{BUCKET_URI}/data/abalone_train.csv"
Explanation: Tracking parameters and metrics in Vertex AI custom training jobs
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
End of explanation
# Create a managed tabular dataset
# TODO 1
ds = aiplatform.TabularDataset.create(display_name="abalone", gcs_source=[gcs_csv_path])
ds.resource_name
Explanation: Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model.
End of explanation
%%writefile training_script.py
import pandas as pd
import argparse
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--num_units', dest='num_units',
default=64, type=int,
help='Number of unit for first layer.')
args = parser.parse_args()
# uncomment and bump up replica_count for distributed training
# strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# tf.distribute.experimental_set_strategy(strategy)
col_names = ["Length", "Diameter", "Height", "Whole weight", "Shucked weight", "Viscera weight", "Shell weight", "Age"]
target = "Age"
def aip_data_to_dataframe(wild_card_path):
return pd.concat([pd.read_csv(fp.numpy().decode(), names=col_names)
for fp in tf.data.Dataset.list_files([wild_card_path])])
def get_features_and_labels(df):
return df.drop(target, axis=1).values, df[target].values
def data_prep(wild_card_path):
return get_features_and_labels(aip_data_to_dataframe(wild_card_path))
model = tf.keras.Sequential([layers.Dense(args.num_units), layers.Dense(1)])
model.compile(loss='mse', optimizer='adam')
model.fit(*data_prep(os.environ["AIP_TRAINING_DATA_URI"]),
epochs=args.epochs ,
validation_data=data_prep(os.environ["AIP_VALIDATION_DATA_URI"]))
print(model.evaluate(*data_prep(os.environ["AIP_TEST_DATA_URI"])))
# save as Vertex AI Managed model
tf.saved_model.save(model, os.environ["AIP_MODEL_DIR"])
Explanation: Write the training script
Run the following cell to create the training script that is used in the sample custom training job.
End of explanation
# Define the training parameters
job = aiplatform.CustomTrainingJob(
display_name="train-abalone-dist-1-replica",
script_path="training_script.py",
container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
requirements=["gcsfs==0.7.1"],
model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
)
Explanation: Launch a custom training job and track its training parameters on Vertex AI ML Metadata
End of explanation
aiplatform.start_run("custom-training-run-1") # Change this to your desired run name
parameters = {"epochs": 10, "num_units": 64}
aiplatform.log_params(parameters)
# Launch the training job
# TODO 2
model = job.run(
ds,
replica_count=1,
model_display_name="abalone-model",
args=[f"--epochs={parameters['epochs']}", f"--num_units={parameters['num_units']}"],
)
Explanation: Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.
End of explanation
# Deploy the model
# TODO 3
endpoint = model.deploy(machine_type="n1-standard-4")
Explanation: Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins.
End of explanation
def read_data(uri):
dataset_path = data_utils.get_file("abalone_test.data", uri)
col_names = [
"Length",
"Diameter",
"Height",
"Whole weight",
"Shucked weight",
"Viscera weight",
"Shell weight",
"Age",
]
dataset = pd.read_csv(
dataset_path,
names=col_names,
na_values="?",
comment="\t",
sep=",",
skipinitialspace=True,
)
return dataset
def get_features_and_labels(df):
target = "Age"
return df.drop(target, axis=1).values, df[target].values
test_dataset, test_labels = get_features_and_labels(
read_data(
"https://storage.googleapis.com/download.tensorflow.org/data/abalone_test.csv"
)
)
Explanation: Once the model is deployed, perform online prediction using the abalone_test dataset and calculate prediction metrics.
Prepare the prediction dataset.
End of explanation
# Perform online prediction using endpoint
# TODO 4
prediction = endpoint.predict(test_dataset.tolist())
prediction
Explanation: Perform online prediction.
End of explanation
mse = mean_squared_error(test_labels, prediction.predictions)
mae = mean_absolute_error(test_labels, prediction.predictions)
aiplatform.log_metrics({"mse": mse, "mae": mae})
Explanation: Calculate and track prediction evaluation metrics.
End of explanation
# Extract all parameters and metrics of the experiment
# TODO 5
aiplatform.get_experiment_df()
Explanation: Extract all parameters and metrics created during this experiment.
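As a hedged aside (not part of the original notebook): the call above returns a pandas DataFrame, so the usual pandas inspection and filtering applies. The exact column names depend on how the SDK prefixes logged parameters and metrics, so treat the code below as an illustrative sketch only.
exp_df = aiplatform.get_experiment_df()
print(exp_df.columns)   # inspect which run/parameter/metric columns are actually present
print(exp_df.head())    # one row per experiment run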
End of explanation
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
Explanation: View data in the Cloud Console
Parameters and metrics can also be viewed in the Cloud Console.
End of explanation
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete dataset
ds.delete()
# Delete the training job
job.delete()
# Undeploy model from endpoint
endpoint.undeploy_all()
# Delete the endpoint
endpoint.delete()
# Delete the model
model.delete()
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil -m rm -r $BUCKET_URI
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Vertex AI Dataset
Training Job
Model
Endpoint
Cloud Storage Bucket
End of explanation |
13,441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1><span style="color:gray">ipyrad-analysis toolkit:</span> mrbayes</h1>
Step1: Simulate a gene tree with 8 tips and MRCA of 1M generations
Step2: Simulate sequences on single gene tree and write to NEXUS
When Ne is larger, the gene tree is more likely to deviate from the species tree topology and branch lengths. By setting the recombination rate to 0 there will be only one true underlying genealogy for the gene tree. We set nsamples=2 because we want to simulate diploid individuals.
Step3: View an example locus
This shows the 2 haploid samples simulated for each tip in the species tree.
Step4: (1) Infer a tree under a relaxed molecular clock model
Step5: (2) Concatenated sequences from a species tree
Here we use concatenated sequence data from 100 loci where each represents one or more distinct genealogies. In addition, Ne is increased to 1e5, allowing for more genealogical variation. We expect the accuracy of estimated edge lengths will decrease since we are not adequately modeling the genealogical variation when using concatenation. Here I set the recombination rate within loci to be zero. There is free recombination among loci, however, since they are unlinked.
Step6: To see the NEXUS file (data, parameters, priors)
Step7: (3) Tree inference (not fixed topology) and plotting support values
Here we will try to infer the topology from a concatenated data set (i.e., not set a constraint on the topology). I increased the ngen setting since the MCMC chain takes longer to converge when searching over topology space. Take note that the support values from mrbayes newick files are available in the "prob{percent}" feature, as shown below.
Step8: The tree topology was correctly inferred
Step9: The branch lengths are not very accurate in this case | Python Code:
# conda install ipyrad -c conda-forge -c bioconda
# conda install mrbayes -c conda-forge -c bioconda
# conda install ipcoal -c conda-forge
import toytree
import ipcoal
import ipyrad.analysis as ipa
Explanation: <h1><span style="color:gray">ipyrad-analysis toolkit:</span> mrbayes</h1>
In these analyses our interest is primarily in inferring accurate branch lengths under a relaxed molecular clock model. This means that tips are forced to line up at the present (time) but that rates of substitutions are allowed to vary among branches to best explain the variation in the sequence data.
There is a huge range of models that can be run in mrbayes by combining different parameter settings, model definitions, and prior settings. The ipyrad-analysis tool here is intended to make it easy to run such jobs many times (e.g., distributed in parallel) once you have decided on your settings. In addition, we provide a number of pre-set models (e.g., clock_model=2) that may be useful for simple scenarios.
Here we use simulations to demonstrate the accuracy of branch length estimation when sequences come from a single versus multiple distinct genealogies (e.g., gene tree vs species tree), and show an option to fix the topology to speed up analyses when your only interest is to estimate branch lengths.
End of explanation
TREE = toytree.rtree.bdtree(ntips=8, b=0.8, d=0.2, seed=123)
TREE = TREE.mod.node_scale_root_height(1e6)
TREE.draw(ts='o', layout='d', scalebar=True);
Explanation: Simulate a gene tree with 8 tips and MRCA of 1M generations
End of explanation
# init simulator
model = ipcoal.Model(TREE, Ne=2e4, nsamples=2, recomb=0)
# simulate sequence data on coalescent genealogies
model.sim_loci(nloci=1, nsites=20000)
# write results to database file
model.write_concat_to_nexus(name="mbtest-1", outdir='/tmp', diploid=True)
# the simulated genealogy of haploid alleles
gene = model.df.genealogy[0]
# draw the genealogy
toytree.tree(gene).draw(ts='o', layout='d', scalebar=True);
Explanation: Simulate sequences on single gene tree and write to NEXUS
When Ne is larger, the gene tree is more likely to deviate from the species tree topology and branch lengths. By setting the recombination rate to 0 there will be only one true underlying genealogy for the gene tree. We set nsamples=2 because we want to simulate diploid individuals.
End of explanation
model.draw_seqview(idx=0, start=0, end=50);
Explanation: View an example locus
This shows the 2 haploid samples simulated for each tip in the species tree.
End of explanation
# init the mb object
mb = ipa.mrbayes(
data="/tmp/mbtest-1.nex",
name="itest-1",
workdir="/tmp",
clock_model=2,
constraints=TREE,
ngen=int(1e6),
nruns=2,
)
# modify a parameter
mb.params.clockratepr = "normal(0.01,0.005)"
mb.params.samplefreq = 5000
# summary of priors/params
print(mb.params)
# start the run
mb.run(force=True)
# load the inferred tree
mbtre = toytree.tree("/tmp/itest-1.nex.con.tre", 10)
# scale root node to 1e6
mbtre = mbtre.mod.node_scale_root_height(1e6)
# draw inferred tree
c, a, m = mbtre.draw(ts='o', layout='d', scalebar=True);
# draw TRUE tree in orange on the same axes
TREE.draw(
axes=a,
ts='o',
layout='d',
scalebar=True,
edge_colors="darkorange",
node_sizes=0,
fixed_order=mbtre.get_tip_labels(),
);
# check convergence statistics
mb.convergence_stats
Explanation: (1) Infer a tree under a relaxed molecular clock model
End of explanation
# init simulator
model = ipcoal.Model(TREE, Ne=1e5, nsamples=2, recomb=0)
# simulate sequence data on coalescent genealogies
model.sim_loci(nloci=100, nsites=200)
# write results to database file
model.write_concat_to_nexus(name="mbtest-2", outdir='/tmp', diploid=True)
# the simulated genealogies of haploid alleles
genes = model.df.genealogy[:4]
# draw the genealogies of the first four loci
toytree.mtree(genes).draw(ts='o', layout='r', height=250);
# init the mb object
mb = ipa.mrbayes(
data="/tmp/mbtest-2.nex",
workdir="/tmp",
name="itest-2",
clock_model=2,
constraints=TREE,
ngen=int(1e6),
nruns=2,
)
# summary of priors/params
print(mb.params)
# start the run
mb.run(force=True)
# load the inferred tree
mbtre = toytree.tree("/tmp/itest-2.nex.con.tre", 10)
# scale root node from unitless to 1e6
mbtre = mbtre.mod.node_scale_root_height(1e6)
# draw inferred tree
c, a, m = mbtre.draw(ts='o', layout='d', scalebar=True);
# draw true tree in orange on the same axes
TREE.draw(
axes=a,
ts='o',
layout='d',
scalebar=True,
edge_colors="darkorange",
node_sizes=0,
fixed_order=mbtre.get_tip_labels(),
);
mb.convergence_stats
Explanation: (2) Concatenated sequences from a species tree
Here we use concatenated sequence data from 100 loci where each represents one or more distinct genealogies. In addition, Ne is increased to 1e5, allowing for more genealogical variation. We expect the accuracy of estimated edge lengths will decrease since we are not adequately modeling the genealogical variation when using concatenation. Here I set the recombination rate within loci to be zero. There is free recombination among loci, however, since they are unlinked.
End of explanation
mb.print_nexus_string()
Explanation: To see the NEXUS file (data, parameters, priors):
End of explanation
# init the mb object
mb = ipa.mrbayes(
data="/tmp/mbtest-2.nex",
name="itest-3",
workdir="/tmp",
clock_model=2,
ngen=int(2e6),
nruns=2,
)
# summary of priors/params
print(mb.params)
# start run
mb.run(force=True)
Explanation: (3) Tree inference (not fixed topology) and plotting support values
Here we will try to infer the topology from a concatenated data set (i.e., not set a constraint on the topology). I increased the ngen setting since the MCMC chain takes longer to converge when searching over topology space. Take note that the support values from mrbayes newick files are available in the "prob{percent}" feature, as shown below.
End of explanation
# load the inferred tree
mbtre = toytree.tree("/tmp/itest-3.nex.con.tre", 10)
# scale root node from unitless to 1e6
mbtre = mbtre.mod.node_scale_root_height(1e6)
# draw inferred tree
c, a, m = mbtre.draw(
layout='d',
scalebar=True,
node_sizes=18,
node_labels="prob{percent}",
);
Explanation: The tree topology was correctly inferred
End of explanation
# load the inferred tree
mbtre = toytree.tree("/tmp/itest-3.nex.con.tre", 10)
# scale root node from unitless to 1e6
mbtre = mbtre.mod.node_scale_root_height(1e6)
# draw inferred tree
c, a, m = mbtre.draw(ts='o', layout='d', scalebar=True);
# draw true tree in orange on the same axes
TREE.draw(
axes=a,
ts='o',
layout='d',
scalebar=True,
edge_colors="darkorange",
node_sizes=0,
fixed_order=mbtre.get_tip_labels(),
);
Explanation: The branch lengths are not very accurate in this case:
End of explanation |
13,442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The EOH (Evolution of Hamiltonian) Algorithm
This notebook demonstrates how to use the Qiskit Aqua library to invoke the EOH algorithm and process the result.
Further information about the algorithms may be found in the online Aqua documentation.
For this particular demonstration, we illustrate the EOH algorithm. First, we create two Operator instances from randomly generated Hamiltonians.
Step1: For EOH, we would like to evolve some initial state (e.g. the uniform superposition state) with evoOp and do a measurement using qubitOp. Below, we illustrate how such an example dynamics process can be easily prepared.
Step2: With all the necessary pieces prepared, we can then proceed to run the algorithm and examine the result. | Python Code:
import numpy as np
from qiskit_aqua.operator import Operator
num_qubits = 2
temp = np.random.random((2 ** num_qubits, 2 ** num_qubits))
qubitOp = Operator(matrix=temp + temp.T)
temp = np.random.random((2 ** num_qubits, 2 ** num_qubits))
evoOp = Operator(matrix=temp + temp.T)
Explanation: The EOH (Evolution of Hamiltonian) Algorithm
This notebook demonstrates how to use the Qiskit Aqua library to invoke the EOH algorithm and process the result.
Further information about the algorithms may be found in the online Aqua documentation.
For this particular demonstration, we illustrate the EOH algorithm. First, we create two Operator instances from randomly generated Hamiltonians.
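As a small added aside (not from the original notebook): adding a real random matrix to its transpose, as done above, guarantees the operators are Hermitian, which is what a Hamiltonian must be. A plain NumPy check makes this explicit.
# Illustrative sanity check with plain numpy
m = np.random.random((2 ** num_qubits, 2 ** num_qubits))
h = m + m.T
print(np.allclose(h, h.conj().T))  # True: a real symmetric matrix is Hermitian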
End of explanation
from qiskit_aqua.input import get_input_instance
params = {
'problem': {
'name': 'eoh'
},
'algorithm': {
'name': 'EOH',
'num_time_slices': 1
},
'initial_state': {
'name': 'CUSTOM',
'state': 'uniform'
},
'backend': {
'name': 'statevector_simulator'
}
}
algo_input = get_input_instance('EnergyInput')
algo_input.qubit_op = qubitOp
algo_input.add_aux_op(evoOp)
Explanation: For EOH, we would like to evolve some initial state (e.g. the uniform superposition state) with evoOp and do a measurement using qubitOp. Below, we illustrate how such an example dynamics process can be easily prepared.
End of explanation
from qiskit_aqua import run_algorithm
ret = run_algorithm(params, algo_input)
print('The result is\n{}'.format(ret))
Explanation: With all the necessary pieces prepared, we can then proceed to run the algorithm and examine the result.
End of explanation |
13,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/text_files.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Step1: Sample programs
Before working on the following programming examples, it is necessary to upload some text files to Colab. For that, select the folder icon on the left side of Colab. Then, move the cursor over the sample_data folder in the file panel that opens. After that, click on the three dots and select Upload. Lastly, upload a text file with easting, northing, elevation data or use the code block provided below to copy the file from Google Drive to the sample_data folder of Colab.
Step2: Simple solution (following the flowchart above) to list the content of a text file.
Step3: Example to add ordinal number to rows in a file using Pythonic code.
Step4: Is the result satisfactory? Note that there are empty lines between the data rows. This happens because the line read from the file has an EOL character and the print command adds another EOL to the line. Therefore, we remove the EOL char(s).
Step5: Let's create a new file with the row numbers.
Step6: We can check if the new file exists.
Step7: Let's find the bounding box from the coordinates stored in the file. Fields are separated by space. For that, we will use pandas.
Step8: To avoid unnecessary decimals, let's use an f-string to format the output.
Step9: Pandas handles data sets of records, each having an index number. Some examples to access data in a data set are provided in the code blocks below
Step10: Adding a new column to the data set, the distance from the origin, and some statistical data.
Step11: Pandas has some plotting capabilities as well. The picture below illustrates a 2D view of the point cloud.
Step12: Alternative plot with point IDs, direct use of matplotlib.
Step13: Let's create a text file from the Pandas data set which contains only the east and the north coordinates.
Step14: Note
Pandas can read/write data from/to relational databases too (SQLite, PostgreSQL, etc.).
Parse Nikon recorded observations
Nikon total stations save observations into a variable-length, variable-structure delimited text file. Pandas is not a good fit for this situation.
Sample from the file
Step17: First, we will make auxiliary functions. Mean directions and zenith angles are in DMS format as a pseudo decimal number (e.g. DDD.MMSS). Based on that, we'll write a function to change the DDD.MMSS values to radians, and another function to convert radian to DDD-MM-SS format.
Step18: We won't process all record types from the file; only ST, F1 and SS records will be considered. Given this, the next part of the code parses the input file.
Step19: The next and last part will print the field-book in human readable form.
Step20: Complex example for self-study
Finally let's try a more complex example. The following program can be downloaded from the Internet, as follows (it is also available on GitHub)
Step21: This program gets parameters from the command line, so it can be used in automation processes.
Step22: The next command shows how to get help about the program. On your own machine, do not use the "!" given that it is a Colab feature. You may use python3 instead of python.
Step23: The next command shows how to use the program on the gcp.txt file previously downloaded (First example of Sample Programs). The code block below can be read as follows | Python Code:
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "https://github.com/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/images/file_proc.png?raw=true")
Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/text_files.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Text file processing in Python
There are two main groups of files stored on a computer: text files and binary files. Text files are human readable, usually edited by notepad, notepad++, etc. (e.g. .txt, .csv, .html, .xml). These files consist of lines that are separated by the end of line marker (EOL). On the other hand, binary files are created/read by special programs (for example .jpg, .exe, .las, .doc, .xls).
The EOL marker depends on the operating system:
Operating system | EOL marker
-----------------|------------
Windows | \r\n
Linux/Unix | \n
OS X | \r
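As a quick illustrative aside (not part of the original lesson), Python exposes the EOL convention of the platform it runs on; note also that open() in text mode translates any of these to '\n' when reading (universal newlines).
import os
print(repr(os.linesep))   # '\r\n' on Windows, '\n' on Linux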
Types of text files
Have you ever seen files in the following formats?
CSV file with header line, comma separated, fixed or variable record structure, header is optional
Psz,X,Y,Z,
11,91515.440,2815.220,111.920
12,90661.580,1475.280,
13,84862.540,3865.360,
14,91164.160,4415.080,130.000
15,86808.180,347.660,
16,90050.240,3525.120,
231,88568.240,2281.760,
232,88619.860,3159.880,
5001,,,100.000
5002,,,138.800
...
Stanford Triangle Format (Polygon File Format) for point clouds and meshes, several header lines, space separated records with fixed structure
ply
format ascii 1.0
element vertex 1978561
property float x
property float y
property float z
property float nx
property float ny
property float nz
property uchar diffuse_red
property uchar diffuse_green
property uchar diffuse_blue
end_header
0.445606 -10.6263 16.0626 -0.109425 -0.0562636 -0.992401 63 68 83
0.460964 -10.6142 16.0604 -0.255715 -0.00303709 -0.966747 43 52 72
0.434582 -10.4337 16.0433 -0.252035 0.171206 -0.952453 32 36 44
0.449782 -10.3186 16.0506 -0.175198 -0.0186472 -0.984357 40 42 53
...
ESRI ASCII GRID format, six header lines, space separated, fixed record structure
ncols 9
nrows 11
xllcorner 576540
yllcorner 188820
cellsize 30
nodata_value -9999
-9999 -9999 139.37 139.81 140.77 141.97 143.32 144.16 -9999
-9999 137.29 137.61 138.00 138.93 140.02 141.40 141.60 140.81
-9999 135.78 135.69 135.89 137.04 138.25 139.44 139.76 139.19
133.94 134.15 133.98 134.03 135.28 136.79 137.69 137.92 137.87
132.76 132.77 132.99 132.58 133.76 135.16 135.73 135.77 135.80
131.76 131.53 131.64 130.81 132.26 133.44 133.85 133.93 -9999
-9999 -9999 130.75 130.15 130.52 132.00 132.46 -9999 -9999
...
Leica GSI file, fixed field width, space separated, variable record length
*110001+0000000000002014 81..10+0000000000663190 82..10+0000000000288540 83..10-0000000000001377
*110002+0000000000002015 81..10+0000000000649270 82..10+0000000000319760 83..10-0000000000000995
*110003+0000000000002019 81..10+0000000000593840 82..10+0000000000253050 83..10-0000000000001486
*110004+0000000000002020 81..10+0000000000562890 82..10+0000000000274730 83..10-0000000000001309
*110005+00000000000000AE 81..10+0000000000664645 82..10+0000000000245619 83..10+0000000000001505
*110006+00000000000000EL 81..10+0000000000714787 82..10+0000000000300190 83..10+0000000000002396
*110007+00000000000000HK 81..10+0000000000633941 82..10+0000000000269764 83..10+0000000000000362
*410014+0000000000000021 42....+000000000000BP04 43....+0000000000001538
*110015+000000000000BP03 21.322+0000000016901313 22.322+0000000009955914 31..00+0000000000029462 51..1.+00000008+0000000
...
GeoJSON file, free format, label - value pairs, vectors, hierarchical structure
{ "type": "FeatureCollection",
"features": [
{ "type": "Feature",
"geometry": {"type": "Point", "coordinates": [102.0, 0.5]},
"properties": {"prop0": "value0"}
},
{ "type": "Feature",
"geometry": {
"type": "LineString",
"coordinates": [
[102.0, 0.0], [103.0, 1.0], [104.0, 0.0], [105.0, 1.0]
]
},
...
GML (XML), free format, hierarchical structure, tags, international standard
```
<?xml version="1.0" encoding="utf-8" ?>
<ogr:FeatureCollection
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://ogr.maptools.org/ xxx.xsd"
xmlns:ogr="http://ogr.maptools.org/"
xmlns:gml="http://www.opengis.net/gml">
<gml:boundedBy>
<gml:Box>
<gml:coord><gml:X>632897.91</gml:X><gml:Y>134104.66</gml:Y></gml:coord>
<gml:coord><gml:X>636129.8</gml:X><gml:Y>138914.58</gml:Y></gml:coord>
</gml:Box>
</gml:boundedBy>
<gml:featureMember>
<ogr:xxx fid="xxx.0">
<ogr:geometryProperty><gml:Point srsName="EPSG:23700"><gml:coordinates>635474.17,137527.75</gml:coordinates></gml:Point></ogr:geometryProperty>
...
```
Processing patterns
In the automated processing of text files, the command line interface and command line parameters are used. There is no need for a GUI (Graphical User Interface), since there is no user to interact with the program.
Redirection of standard input and output
------- ------------ --------
| input | | processing | | output |
| file | ---> | script/prg | ---> | file |
------- ------------ --------
command input_file(s) > output_file
command < input_file > output_file
Redirection and pipes
------- ------------ ------------ --------
| input | | processing | | processing | | output |
| file | ---> | 1st step | ---> | 2nd step | ---> ... | file |
------- ------------ ------------ --------
command1 input_file(s) | command2 > output_file
command1 < input_file | command2 > output_file
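A minimal Python sketch of a pipe-friendly filter (an illustrative addition, not part of the original lesson): it reads standard input and writes standard output, so it can be chained as command1 < input.txt | python filter.py > output.txt.
import sys
for i, line in enumerate(sys.stdin, start=1):
    sys.stdout.write(f"{i} {line}")   # prepend a row number, keep the original EOL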
End of explanation
!gdown --id 18SkltcBaEiMMKA3siyVUKdOWl8FhSB-m -O sample_data/gcp.txt
Explanation: Sample programs
Before working on the following programming examples, it is necessary to upload some text files to Colab. For that, select the folder icon on the left side of Colab. Then, move the cursor over the sample_data folder in the file panel that opens. After that, click on the three dots and select Upload. Lastly, upload a text file with easting, northing, elevation data or use the code block provided below to copy the file from Google Drive to the sample_data folder of Colab.
End of explanation
fp = open('sample_data/gcp.txt', 'r') # open file for read
line = "not EOF" # dummy value
while len(line) > 0: # repeat until end of file (EOF)
line = fp.readline() # returns empty string at EOF
print(line) # print line to screen
fp.close()
Explanation: Simple solution (following the flowchart above) to list the content of a text file.
End of explanation
i = 1 # line counter
with open('sample_data/gcp.txt') as fp: # open input file
for line in fp: # for each line of file
print(i, line) # echo line number & line
i += 1 # increment line number
Explanation: Example to add ordinal number to rows in a file using Pythonic code.
End of explanation
i = 1
with open('sample_data/gcp.txt', 'r') as fp:
for line in fp:
print(i, line.strip('\n\r')) # print(i, line, end='') would also be fine
i += 1
Explanation: Is the result satisfactory? Note that there are empty lines between the data rows. This happens because the line read from the file has an EOL character and the print command adds another EOL to the line. Therefore, we remove the EOL char(s).
End of explanation
i = 1
with open('sample_data/gcp.txt') as fp:
with open('sample_data/gcp1.txt', 'w') as fo: # open file for write
for line in fp:
print(i, line, end='', file=fo)
i += 1
Explanation: Let's create a new file with the row numbers.
End of explanation
!ls sample_data
!more sample_data/gcp1.txt
Explanation: We can check if the new file exists.
End of explanation
import pandas as pd
names = ['id', 'east', 'north', 'elev'] # column names in text file
data = pd.read_csv('sample_data/gcp1.txt', sep=' ', names=names)
mi = data.min()
ma = data.max()
print(mi['east'], ma['east'], mi['north'], ma['north'], mi['elev'], ma['elev'])
Explanation: Let's find the bounding box from the coordinates stored in the file. Fields are separated by space. For that, we will use pandas.
End of explanation
print(f"{mi['east']:.2f} {ma['east']:.2f} {mi['north']:.2f} {ma['north']:.2f} {mi['elev']:.2f} {ma['elev']:.2f}")
Explanation: To avoid unnecessary decimals, let's use an f-string to format the output.
End of explanation
data.iloc[[0]] # get first row
data["east"] # get a column
data["north"][2] # get a field
Explanation: Pandas handles data sets of records, each having an index number. Some examples to access data in a data set are provided in the code blocks below:
End of explanation
data["dist"] = (data['east'] ** 2 + data["north"] ** 2 + data["elev"] ** 2) ** 0.5
data["dist"].describe()
Explanation: Adding a new column to the data set, the distance from the origin, and some statistical data.
End of explanation
data.plot.scatter(x='east', y='north', c='blue')
Explanation: Pandas has some plotting capabilities as well. The picture below illustrates a 2D view of the point cloud.
End of explanation
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.scatter(data['east'], data['north'])
for index, row in data.iterrows():
ax.annotate(int(row['id']), (row['east']+0.5, row['north']))
Explanation: Alternative plot with point IDs, direct use of matplotlib.
End of explanation
data.to_csv('sample_data/xy.csv', columns=['east', 'north'], index=False)
Explanation: Let's create a text file from the Pandas data set which contains only the east and the north coordinates.
End of explanation
!gdown --id 1AxSK6qqLITEpYV0u0Fr9w4avEqfIF7MN -O sample_data/nikon.raw
Explanation: Note
Pandas can read/write data from/to relational databases too (SQLite, PostgreSQL, etc.).
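A minimal sketch of that database round trip (not part of the original lesson; it assumes the data DataFrame from the previous cells and uses the sqlite3 module from the standard library):
import sqlite3
con = sqlite3.connect('sample_data/points.db')
data.to_sql('gcp', con, if_exists='replace', index=False)     # DataFrame -> database table
print(pd.read_sql('SELECT id, east, north FROM gcp', con))    # table -> DataFrame
con.close()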
Parse Nikon recorded observations
Nikon total stations save observations into a variable-length, variable-structure delimited text file. Pandas is not a good fit for this situation.
Sample from the file:
CO,HA Raw data: Azimuth
CO,Tilt Correction: VA:OFF HA:OFF
CO, TOPO <JOB> Created 01-Jan-2001 05:00:27
MP,OMZ1,,4950153.530,5173524.258,341.532,
MP,OMZ2,,4950392.611,5173092.830,306.781,
CO,Temp:30C Press:740mmHg Prism:0 01-Jan-2001 05:39:35
ST,OMZ1,,OMZ2,,0.000,298.5937,298.5937
F1,OMZ2,0.000,494.429,0.0000,94.0142,05:39:35
SS,1,0.000,111.109,268.2305,101.4007,05:46:01,
The first two characters in each line mark the type of the record.
We'll write a program that changes the Nikon raw format into a human readable table. Furthermore, we'll divide our program into two parts. First, the whole data file will be processed and the data will be stored in a list of dictionaries structure. Then, in the second part, we'll write out data. The second part can be replaced for processed data to calculate coordinates, for example.
End of explanation
from math import pi
def to_rad(pseudo_dms):
    """convert pseudo DMS string (DDD.MMSS) to radians"""
w = pseudo_dms.split('.') # separate degree and MMSS
degree = int(w[0])
minute = int(w[1][:2])
second = int(w[1][2:])
return (degree + minute / 60 + second / 3600) / 180 * pi
to_rad('180.0000') # to test the function
ro = 180 / pi * 3600 # one radian in seconds
def to_dms(angle):
    """convert angle from radian to DMS string"""
s = int(angle * ro)
degree, s = divmod(s, 3600)
minute, s = divmod(s, 60)
return f'{degree:d}-{int(minute):02d}-{int(s):02d}'
to_dms(pi)
Explanation: First, we will make auxiliary functions. Mean directions and zenith angles are in DMS format as a pseudo decimal number (e.g. DDD.MMSS). Based on that, we'll write a function to change the DDD.MMSS values to radians, and another function to convert radian to DDD-MM-SS format.
End of explanation
field_book = [] # list to store field-book data
with open('sample_data/nikon.raw') as f:
for line in f: # process file line by line
rec_list = line.strip('\n\r').split(',') # remove EOL marker(s) and slip by comma
rec_dict = {} # empty dictionary for needed data
if rec_list[0] == 'ST': # station record
rec_dict['station'] = rec_list[1] # station id
rec_dict['ih'] = float(rec_list[5]) # instrument height
field_book.append(rec_dict)
elif rec_list[0] in ('F1', 'SS'): # observation in face left
rec_dict['target'] = rec_list[1] # target id
rec_dict['th'] = float(rec_list[2]) # target height
rec_dict['sd'] = float(rec_list[3]) # slope distance
rec_dict['ha'] = to_rad(rec_list[4]) # mean direction
rec_dict['za'] = to_rad(rec_list[5]) # zenith angle
if len(rec_list) > 7 and len(rec_list[7]) > 0:
rec_dict['cd'] = rec_list[7] # point code
else:
rec_dict['cd'] = ''
field_book.append(rec_dict)
field_book[:4] # for test the first 3 item
Explanation: We won't process all record types from the file; only ST, F1 and SS records will be considered. Given this, the next part of the code parses the input file.
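As an optional follow-up (not in the original lesson), the list-of-dictionaries structure drops straight into pandas if further analysis or export is needed:
import pandas as pd
fb = pd.DataFrame(field_book)   # one row per ST/F1/SS record, NaN where keys differ
print(fb.head())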
End of explanation
header1 = '----------------------------------------------------------'
header2 = '| station| target | HA | VA | SD | Code |'
print(header1); print(header2); print(header1)
for rec in field_book:
if 'station' in rec:
print(f"|{rec['station']:8s}| | | | | |")
elif 'target' in rec:
print(f"| |{rec['target']:8s}|{to_dms(rec['ha']):>9s}|{to_dms(rec['za']):>9s}|{rec['sd']:9.3f}|{rec['cd']:8s}|")
print(header1)
Explanation: The next and last part will print the field-book in human readable form.
End of explanation
!gdown --id 1Le7CO2-klyJMmlrT7inMWaaLggEqz2DT
Explanation: Complex example for self-study
Finally let's try a more complex example. The following program can be downloaded from the Internet, as follows (it is also available on GitHub):
End of explanation
!cat filt.py
Explanation: This program gets parameters from the command line, so it can be used in automation processes.
End of explanation
! python filt.py -h
Explanation: The next command shows how to get help about the program. On your own machine, do not use the "!" given that it is a Colab feature. You may use python3 instead of python.
End of explanation
!python filt.py -i " " -r 2 -d 2 -n sample_data/gcp.txt
Explanation: The next command shows how to use the program on the gcp.txt file previously downloaded (First example of Sample Programs). The code block below can be read as follows: the input separator is a space, keep every second row, use two decimals and add row numbers to the output.
End of explanation |
13,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Lecture 3
Step2: <h1>Discrete Fourier Series</h1>
Consider a function $f$ periodic over a domain $0\leq x\leq 2\pi$, discretized by $N_x$ points. The longest wavelength wave that can be contained in the domain is $L_x$. A physical understanding of Fourier series is the representation of a system as the sum of many waves of wavelengths smaller than or equal to $L_x$. In a discrete sense, the series of waves used to decompose the system is defined as
Step3: <h2>Spectrum</h2>
For now we will define the spectrum of $f$ as
<p class='alert alert-danger'>
$$
F(k_n) = \hat{f}_n.\hat{f}_n^*
$$
</p>
which can be interpreted as the energy contained in the $k_n$ wavenumber. This is helpful when searching for the most energetic scales or waves in our system. Thanks to the symmetries of the FFT, the spectrum is defined over $n=0$ to $N_x/2$
Step4: <h2>Low-Pass Filter</h2>
The following code filters the original signal down to half of the wavenumbers using the FFT and compares the result to the exact filtered function
Step5: <h2> High-Pass Filter</h2>
From the example below, develop a function for a high-pass filter. | Python Code:
%matplotlib inline
# plots graphs within the notebook
%config InlineBackend.figure_format='svg' # not sure what this does, may be default images to svg format
from IPython.display import Image
from IPython.core.display import HTML
def header(text):
raw_html = '<h4>' + str(text) + '</h4>'
return raw_html
def box(text):
raw_html = '<div style="border:1px dotted black;padding:2em;">'+str(text)+'</div>'
return HTML(raw_html)
def nobox(text):
raw_html = '<p>'+str(text)+'</p>'
return HTML(raw_html)
def addContent(raw_html):
global htmlContent
htmlContent += raw_html
class PDF(object):
def __init__(self, pdf, size=(200,200)):
self.pdf = pdf
self.size = size
def _repr_html_(self):
return '<iframe src={0} width={1[0]} height={1[1]}></iframe>'.format(self.pdf, self.size)
def _repr_latex_(self):
return r'\includegraphics[width=1.0\textwidth]{{{0}}}'.format(self.pdf)
class ListTable(list):
    """Overridden list class which takes a 2-dimensional list of
    the form [[1,2,3],[4,5,6]], and renders an HTML Table in
    IPython Notebook."""
def _repr_html_(self):
html = ["<table>"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
return ''.join(html)
font = {'family' : 'serif',
'color' : 'black',
'weight' : 'normal',
'size' : 18,
}
Explanation: Lecture 3: Accuracy in Fourier's Space
End of explanation
import matplotlib.pyplot as plt
import numpy as np
Lx = 2.*np.pi
Nx = 256
u = np.zeros(Nx,dtype='float64')
du = np.zeros(Nx,dtype='float64')
ddu = np.zeros(Nx,dtype='float64')
k_0 = 2.*np.pi/Lx
x = np.linspace(Lx/Nx,Lx,Nx)
Nwave = 32
uwave = np.zeros((Nx,Nwave),dtype='float64')
duwave = np.zeros((Nx,Nwave),dtype='float64')
dduwave = np.zeros((Nx,Nwave),dtype='float64')
#ampwave = np.array([0., 1.0, 2.0, 3.0])
ampwave = np.random.random(Nwave)
#print(ampwave)
#phasewave = np.array([0.0, 0.0, np.pi/2, np.pi/2])
phasewave = np.random.random(Nwave)*2*np.pi
#print(phasewave)
for iwave in range(Nwave):
uwave[:,iwave] = ampwave[iwave]*np.cos(k_0*iwave*x+phasewave[iwave])
duwave[:,iwave] = -k_0*iwave*ampwave[iwave]*np.sin(k_0*iwave*x+phasewave[iwave])
dduwave[:,iwave] = -(k_0*iwave)**2*ampwave[iwave]*np.cos(k_0*iwave*x+phasewave[iwave])
u = np.sum(uwave,axis=1)
#print(u)
plt.plot(x,u,lw=2)
plt.xlim(0,Lx)
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.show()
plt.show()
#check FT^-1(FT(u))
u_hat = np.fft.fft(u)
v = np.real(np.fft.ifft(u_hat))
plt.plot(x,u,'r-',lw=2,label='u')
plt.plot(x,v,'b--',lw=2,label='after ifft(fft(u))')
plt.xlim(0,Lx)
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.show()
print('error',np.linalg.norm(u-v,np.inf))
Explanation: <h1>Discrete Fourier Series</h1>
Consider a function $f$ periodic over a domain $0\leq x\leq 2\pi$, discretized by $N_x$ points. The longest wavelength wave that can be contained in the domain is $L_x$. A physical understanding of Fourier series is the representation of a system as the sum of many waves of wavelengths smaller than or equal to $L_x$. In a discrete sense, the series of waves used to decompose the system is defined as:
$$
a_n\exp\left(\hat{\jmath}\frac{2\pi n x}{L_x}\right)
$$
such that
<p class='alert alert-danger'>
$$
f(x) = \sum_{n=-\infty}^{\infty}a_n\exp\left(\hat{\jmath}\frac{2\pi nx}{Lx}\right)
$$
</p>
and
<p class='alert alert-danger'>
$$
a_n = \frac{1}{L_x}\int_Lf(x)\exp\left(-\hat{\jmath}\frac{2\pi nx}{Lx}\right)dx
$$
</p>
Often the reduction to wavenumber is used, where
<p class='alert alert-danger'>
$$
k_n = \frac{2\pi n}{L_x}
$$
</p>
Note that if $x$ is time instead of distance, $L_x$ is a period $T_0$ and the smallest frequency contained in the domain is $f_0=1/T_0$; the wavenumber $n$ corresponds to $k_n=2\pi f_0n=2\pi f_n$, where the $f_n$ for $\vert n\vert >1$ are the higher frequencies.
<h1>Discrete Fourier Transform (DFT)</h1>
In scientific computing we are interested in applying Fourier series to vectors or matrices containing an integer number of samples. The DFT is the Fourier series for that number of samples. DFT functions available in Python or any other language only care about the number of samples, therefore the wavenumber is
<p class='alert alert-danger'>
$$
k_n=\frac{2\pi n}{N_x}
$$
</p>
Consider a function $f$ periodic over a domain $0\leq x\leq 2\pi$, discretized by $N_x$ points. The nodal value is $f_i$ located at $x_i=(i+1)\Delta x$ with $\Delta x=L_x/Nx$. The DFT is defined as
<p class='alert alert-danger'>
$$
\hat{f}_k=\sum_{i=0}^{N_x-1}f_i\exp\left(-2\pi\hat{\jmath}\frac{ik}{N_x}\right)
$$
</p>
The inverse DFT is defined as
<p class='alert alert-danger'>
$$
f_i=\sum_{k=0}^{N_x-1}\hat{f}_k\exp\left(2\pi\hat{\jmath}\frac{ik}{N_x}\right)
$$
</p>
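As an illustrative aside (not part of the original lecture), the DFT definition above can be evaluated naively and compared against numpy's FFT for a small vector; the two agree to machine precision.
import numpy as np
N = 8
f = np.random.random(N)
i = np.arange(N)
F_naive = np.array([np.sum(f*np.exp(-2j*np.pi*i*kk/N)) for kk in range(N)])
print(np.allclose(F_naive, np.fft.fft(f)))   # True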
<h1>Fast Fourier Transform (FFT)</h1>
Using symmetries, the FFT reduces the computational cost and stores the coefficients in the following way:
<p class='alert alert-danger'>
$$
\hat{f}_k=\sum_{i=-Nx/2+1}^{N_x/2}f_i\exp\left(-2\pi\mathbf{j}\frac{ik}{N_x}\right)
$$
</p>
<p class='alert alert-info'>
Compared to the Fourier series, the DFT or FFT assumes that the system can be accurately captured by a finite number of waves. It is up to the user to ensure that the number of computational points is sufficient to capture the smallest scale, i.e. the smallest wavelength or highest frequency. Remember that the function on which the FT is applied must be periodic over the domain and the grid spacing must be uniform.
</p>
There are FT algorithms for unevenly spaced data, but this is beyond the scope of this notebook.
<h1>Example 1: Filtering</h1>
The following provides examples of low- and high-pass filters based on the Fourier transform. An ideal low- (high-) pass filter passes frequencies below (above) a threshold without attenuation and removes the frequencies above (below) it.
When applied to spatial data (a function of $x$ rather than time $t$), the FT (Fourier Transform) of a variable is a function of wavenumbers
$$
k_n=\frac{2\pi n}{L_x}
$$
or wavelengths
$$
\lambda_n=\frac{2\pi}{k_n}
$$
End of explanation
F = np.zeros(Nx//2+1,dtype='float64')
F = np.real(u_hat[0:Nx//2+1]*np.conj(u_hat[0:Nx//2+1]))
k = np.hstack((np.arange(0,Nx//2+1),np.arange(-Nx//2+1,0)))
plt.loglog(k[0:Nx//2+1],F,'r-',lw=2,label='F_u')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$F_u(k)$', fontdict = font)
plt.show()
plt.show
Explanation: <h2>Spectrum</h2>
For now we will define the spectrum of $f$ as
<p class='alert alert-danger'>
$$
F(k_n) = \hat{f}_n.\hat{f}_n^*
$$
</p>
which can be interpreted as the energy contained in the $k_n$ wavenumber. This is helpful when searching for the most energetic scales or waves in our system. Thanks to the symmetries of the FFT, the spectrum is defined over $n=0$ to $N_x/2$
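A quick Parseval-type check (an added aside, not in the original lecture) relates this spectrum to the energy of the signal: the sum of $f^2$ in physical space equals the sum of the DFT spectrum divided by $N_x$.
f = np.random.random(64)
f_hat = np.fft.fft(f)
print(np.allclose(np.sum(f**2), np.sum(np.real(f_hat*np.conj(f_hat)))/f.size))   # True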
End of explanation
# filtering the smaller waves
def low_pass_filter_fourier(a,k,kcutoff):
    N = a.shape[0]
    a_hat = np.fft.fft(a)                        # transform the argument, not the global u
    filter_mask = np.where(np.abs(k) > kcutoff)  # use the cutoff passed as a parameter
    a_hat[filter_mask] = 0.0 + 0.0j
    a_filter = np.real(np.fft.ifft(a_hat))
    return a_filter
kcut = Nwave//2 + 1
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0)))
v = low_pass_filter_fourier(u,k,kcut)
u_filter_exact = np.sum(uwave[:,0:kcut+1],axis=1)
plt.plot(x,v,'r-',lw=2,label='filtered with fft')
plt.plot(x,u_filter_exact,'b--',lw=2,label='filtered (exact)')
plt.plot(x,u,'g:',lw=2,label='original')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.show()
print('error:',np.linalg.norm(v-u_filter_exact,np.inf))
F = np.zeros(Nx//2+1,dtype='float64')
F_filter = np.zeros(Nx//2+1,dtype='float64')
u_hat = np.fft.fft(u)
F = np.real(u_hat[0:Nx//2+1]*np.conj(u_hat[0:Nx//2+1]))
v_hat = np.fft.fft(v)
F_filter = np.real(v_hat[0:Nx//2+1]*np.conj(v_hat[0:Nx//2+1]))
k = np.hstack((np.arange(0,Nx//2+1),np.arange(-Nx//2+1,0)))
plt.loglog(k[0:Nx//2+1],F,'r-',lw=2,label='F_u')
plt.loglog(k[0:Nx//2+1],F_filter,'b-',lw=2,label='F_v')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$F_u(k)$', fontdict = font)
plt.show()
plt.show
Explanation: <h2>Low-Pass Filter</h2>
The following code filters the original signal down to half of the wavenumbers using the FFT and compares the result to the exact filtered function
End of explanation
u_hat = np.fft.fft(u)
kfilter = 3
k = np.linspace(0,Nx-1,Nx)
filter_mask = np.where((k < kfilter) | (k > Nx-kfilter) )
u_hat[filter_mask] = 0.+0.j
v = np.real(np.fft.ifft(u_hat))
plt.plot(x,v,'r-',lw=2)
plt.plot(x,uwave[:,3],'b--',lw=2)
plt.show()
Explanation: <h2> High-Pass Filter</h2>
From the example below, develop a function for a high-pass filter.
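One possible solution sketch for this exercise (an added example, not from the lecture): build the signed wavenumber array inside the function and zero every mode with |k| below the cutoff, i.e. the mirror image of the low-pass filter above.
def high_pass_filter_fourier(a, kcutoff):
    N = a.shape[0]
    k = np.hstack((np.arange(0, N//2 + 1), np.arange(-N//2 + 1, 0)))   # signed wavenumbers
    a_hat = np.fft.fft(a)
    a_hat[np.abs(k) < kcutoff] = 0.0 + 0.0j     # remove the low wavenumbers
    return np.real(np.fft.ifft(a_hat))

v_high = high_pass_filter_fourier(u, 3)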
End of explanation |
13,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SVC Test
This is an example use case of a support vector classifier. We will infer a data classification function from a set of training data.
We will use the SVC implementation in the scikit-learn toolbox.
Step1: We begin by defining a set of training points. This is the set which the classifier will use to infer the data classification function. Each row represents a data point, with x,y coordinates and classification value.
Step2: To understand the data set, we can plot the points from both classes (1 and 0). Points of class 1 are in black, and points from class 0 in red.
Step3: The SVC uses pandas data frames to represent data. The data frame is a convenient data structure for tabular data, which enables column labels.
Step4: We need to select the set of columns with the data features. In our example, those are the x and y coordinates.
Step5: We are now able to build and train the classifier.
Step6: The classifier is now trained with the fit points, and is ready to be evaluated with a set of test points, which have a similar structure to the fit points
Step7: We separate the features and values to make clear where the data comes from.
Step8: We build the test points dataframe with the features.
Step9: We can add the values to the dataframe.
Step10: Right now we have a dataframe similar to the one with the fit points. We'll use the classifier to add a fourth column with the predicted values. Our goal is to have the same value in both real_value and predicted_value columns.
Step11: The classifier is pretty successful at predicting values from the x and y coordinates. We may also apply the classifier to the fit points - it's somewhat pointless, because those are the points used to infer the data classification function.
Step12: To better understand the data separation between values 1 and 0, we'll plot both the fit points and the test points.
Following the same color code as before, points that belong to class 1 are represented in black, and points that belong to class 0 in red. Fit points are represented as filled circles, and the test points are represented by open circles (circumferences). | Python Code:
from sklearn import svm
import pandas as pd
import pylab as pl
import seaborn as sns
%matplotlib inline
Explanation: SVC Test
This is an example use case of a support vector classifier. We will infer a data classification function from a set of training data.
We will use the SVC implementation in the scikit-learn toolbox.
End of explanation
fit_points = [
[2,1,1],
[1,2,1],
[3,2,1],
[4,2,0],
[4,4,0],
[5,1,0]
]
Explanation: We begin by defining a set of training points. This is the set which the classifier will use to infer the data classification function. Each row represents a data point, with x,y coordinates and classification value.
End of explanation
sns.set(style="darkgrid")
pl.scatter([point[0] if point[2]==1 else None for point in fit_points],
[point[1] for point in fit_points],
color = 'black')
pl.scatter([point[0] if point[2]==0 else None for point in fit_points],
[point[1] for point in fit_points],
color = 'red')
pl.grid(True)
pl.show()
Explanation: To understand the data set, we can plot the points from both classes (1 and 0). Points of class 1 are in black, and points from class 0 in red.
End of explanation
df_fit = pd.DataFrame(fit_points, columns=["x", "y", "value"])
print(df_fit)
Explanation: The SVC uses pandas data frames to represent data. The data frame is a convenient data structure for tabular data, which enables column labels.
End of explanation
train_cols = ["x", "y"]
Explanation: We need to select the set of columns with the data features. In our example, those are the x and y coordinates.
End of explanation
clf = svm.SVC()
clf.fit(df_fit[train_cols], df_fit.value)
Explanation: We are now able to build and train the classifier.
End of explanation
test_points = [
[5,3],
[4,5],
[2,5],
[2,3],
[1,1]
]
Explanation: The classifier is now trained with the fit points, and is ready to be evaluated with a set of test points, which have a similar structure to the fit points: x, y coordinates, and a value.
End of explanation
test_points_values = [0,0,0,1,1]
Explanation: We separate the features and values to make clear where the data comes from.
End of explanation
df_test = pd.DataFrame(test_points, columns=['x','y'])
print(df_test)
Explanation: We build the test points dataframe with the features.
End of explanation
df_test['real_value'] = test_points_values
print(df_test)
Explanation: We can add the values to the dataframe.
End of explanation
df_test['predicted_value'] = clf.predict(test_points)
print(df_test)
Explanation: Right now we have a dataframe similar to the one with the fit points. We'll use the classifier to add a fourth column with the predicted values. Our goal is to have the same value in both real_value and predicted_value columns.
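A hedged aside (not in the original walkthrough): scikit-learn can also report the agreement directly, instead of comparing the two columns by eye.
print(clf.score(df_test[train_cols], df_test.real_value))   # fraction of correct predictions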
End of explanation
df_fit['predicted_value'] = clf.predict([x[0:2] for x in fit_points])
print(df_fit)
Explanation: The classifier is pretty successful at predicting values from the x and y coordinates. We may also apply the classifier to the fit points - it's somewhat pointless, because those are the points used to infer the data classification function.
End of explanation
sns.set(style="darkgrid")
for i in range(0,2):
pl.scatter(df_fit[df_fit.value==i].x,
df_fit[df_fit.value==i].y,
color = 'black' if i == 1 else 'red')
pl.scatter(df_test[df_test.predicted_value==i].x,
df_test[df_test.predicted_value==i].y,
marker='o',
facecolor='none',
color='black' if i == 1 else 'red')
pl.grid(True)
pl.show()
Explanation: To better understand the data separation between values 1 and 0, we'll plot both the fit points and the test points.
Following the same color code as before, points that belong to class 1 are represented in black, and points that belong to class 0 in red. Fit points are represented as filled circles, and the test points are represented by open circles (circumferences).
End of explanation |
13,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright (c) 2017-2020 Serpent-Tools developers team, GTRC
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Data files are not included with the python package, but can be downloaded from the GitHub repository. For this tutorial, the files are placed in the directory identified with the SERPENT_TOOLS_DATA environment variable.
Step1: MicroXSReader
Basic Operation
This notebook demonstrates the capabilities of serpentTools for reading group micro cross-section files. SERPENT [1] produces a micro depletion file containing independent and cumulative fission yields as well as group cross-sections for the isotopes and reactions defined by the user.
The MicroXSReader is capable of reading this file, and storing the data directly on the reader. The reader has two methods to retrieve the data and ease the analysis. Note: in order to obtain the micro depletion files, the user must set the mdep card in the input file.
Step2: The fission yields read in from the file are stored in the nfy dictionary, where the keys represent a specific (parent, energy) pair and the corresponding value is a dictionary with fission product ids and the corresponding fission yield values.
Step3: Each pair represents the isotope undergoing fission and the impending neutron energy in MeV.
Step4: The results for each pair are dictionaries that contain three fields
Step5: Flux ratios and uncertainties are stored in the fluxRatio and fluxUnc dictionaries, where the keys represent a specific universe and the corresponding values are group flux values.
Step6: Cross sections and their uncertainties are stored in the xsVal and xsUnc dictionaries, where the keys represent a specific universe and the corresponding values are dictionaries.
Step7: Each key has three entries (isotope, reaction, flag)
isotope ID of the isotope (ZZAAA0/1), int or float
reaction MT reaction, e.g., 102 (n,gamma)
flag special flag to describe isomeric state or fission yield distribution number
For each such key (isotope, reaction, flag) the xsVal and xsUnc dictionaries store the group-wise cross-section values and uncertainties, respectively.
Step8: Data Retrieval
The MicroXSReader object has two get methods.
getFY method obtains the independent and cumulative fission yields for a specific parent (ZZAAA0/1), daughter (ZZAAA0/1) and neutron energy (MeV). If no parent or daughter is found, the method raises an exception. The method also has a special flag that indicates whether the user wants to obtain the value corresponding to the nearest energy.
getXS method obtains the group-wise cross-sections for a specific universe, isotope and reaction.
Step9: By default, the method includes a flag that allows obtaining the values for the closest available energy to the one defined by the user.
Step10: The user can set this boolean flag to False if only the values at existing energies are of interest.
Step11: getXS method is used to obtain the group cross-sections for a specific universe, isotope and reaction. The method returns the values and uncertainties.
Step12: The method includes a special flag isomeric, which is set to zero by default.
The special flag either describes the isomeric state or fission yield distribution number.
Step13: If the universe exists but the isotope or reaction does not, the method raises an error.
Settings
The MicroXSReader also has a collection of settings to control what data is stored. If none of these settings are modified, the default is to store all the data from the output file.
Step14: microxs.getFY | Python Code:
import os
mdxFile = os.path.join(
os.environ["SERPENT_TOOLS_DATA"],
"ref_mdx0.m",
)
Explanation: Copyright (c) 2017-2020 Serpent-Tools developers team, GTRC
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Data files are not included with the python package, but can be downloaded from the GitHub repository. For this tutorial, the files are placed in the directory identified with the SERPENT_TOOLS_DATA environment variable.
End of explanation
import serpentTools
mdx = serpentTools.read(mdxFile)
Explanation: MicroXSReader
Basic Operation
This notebook demonstrates the capabilities of serpentTools for reading group micro cross-section files. SERPENT [1] produces a micro depletion file containing independent and cumulative fission yields as well as group cross-sections for the isotopes and reactions defined by the user.
The MicroXSReader is capable of reading this file, and storing the data directly on the reader. The reader has two methods to retrieve the data and ease the analysis. Note: in order to obtain the micro depletion files, the user must set the mdep card in the input file.
End of explanation
# All the (parent, energy) pairs can be obtained by using '.keys()'
pairs = mdx.nfy.keys()
list(pairs)[0:5] # list only the first five pairs
Explanation: The fission yields read in from the file are stored in the nfy dictionary, where the keys represent a specific (parent, energy) pair and the corresponding value is a dictionary with fission product ids and the corresponding fission yield values.
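A small illustrative aside (not from the original notebook): because the keys are plain (parent, energy) tuples, they can be filtered with an ordinary comprehension, e.g. to collect all energy points available for U-235.
u235_pairs = [key for key in mdx.nfy if key[0] == 922350]
print(len(u235_pairs))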
End of explanation
pair = list(pairs)[0] # obtain the first (isotope, energy) pair
print('Isotope= {: 1.0f}'.format(pair[0]))
print('Energy= {} MeV'.format(pair[1]))
Explanation: Each pair represents the isotope undergoing fission and the impending neutron energy in MeV.
End of explanation
# Obtain the keys in the nfy dictionary
mdx.nfy[pair].keys()
# Print only the five first fission products
print(mdx.nfy[pair]['fissProd'][0:5])
# Print only the five first fission independent yields
print(mdx.nfy[pair]['indYield'][0:5])
# Print only the five first fission cumulative yields
print(mdx.nfy[pair]['cumYield'][0:5])
Explanation: The results for each pair are dictionaries that contain three fields:
fissProd list of fission products ids
indYield corresponding list of independent fission yields
cumYield corresponding list of cumulative fission yields
End of explanation
# obtain the universes
print(mdx.fluxRatio.keys())
Explanation: Flux ratios and uncertainties are stored in the fluxRatio and fluxUnc dictionaries, where the keys represent a specific universe and the corresponding values are group flux values.
End of explanation
# The keys within the nested dictionary describe the isotope, reaction and special flag
print(mdx.xsVal['0'].keys())
Explanation: Cross sections and their uncertainties are stored in the xsVal and xsUnc dictionaries, where the keys represent a specific universe and the corresponding values are dictionaries.
End of explanation
val = mdx.xsVal['0']
unc = mdx.xsUnc['0']
# Print flux values
print(val[(10010, 102, 0)])
# Print flux uncertainties
print(unc[(10010, 102, 0)])
Explanation: Each key has three entries (isotope, reaction, flag)
isotope ID of the isotope (ZZAAA0/1), int or float
reaction MT reaction, e.g., 102 (n,gamma)
flag special flag to describe isomeric state or fission yield distribution number
For each such key (isotope, reaction, flag) the xsVal and xsUnc dictionaries store the group-wise cross-section values and uncertainties, respectively.
End of explanation
indYield, cumYield = mdx.getFY(parent=922350, energy=2.53e-08, daughter=541350 )
print('Independent yield = {}'.format(indYield))
print('Cumulative yield = {}'.format(cumYield))
Explanation: Data Retrieval
The MicroXSReader object has two get methods.
getFY method obtains the independent and cumulative fission yields for a specific parent (ZZAAA0/1), daughter (ZZAAA0/1) and neutron energy (MeV). If no parent or daughter is found, the method raises an exception. The method also has a special flag that indicates whether the user wants to obtain the value corresponding to the nearest energy.
getXS method obtains the group-wise cross-sections for a specific universe, isotope and reaction.
End of explanation
indYield, cumYield = mdx.getFY(parent=922350, energy=1e-06, daughter=541350 )
print('Independent yield = {}'.format(indYield))
print('Cumulative yield = {}'.format(cumYield))
Explanation: By default, the method includes a flag that allows obtaining the values for the closest available energy to the one defined by the user.
End of explanation
indYield, cumYield = mdx.getFY(parent=922350, energy=2.53e-08, daughter=541350, flagEnergy=False )
Explanation: The user can set this boolean flag to False if only the values at existing energies are of interest.
End of explanation
# Obtain the group cross-sections
vals, unc = mdx.getXS(universe='0', isotope=10010, reaction=102)
# Print group flux values
print(vals)
# Print group flux uncertainties values
print(unc)
Explanation: getXS method is used to obtain the group cross-sections for a specific universe, isotope and reaction. The method returns the values and uncertainties.
End of explanation
# Example of how to use the isomeric flag
vals, unc = mdx.getXS(universe='0', isotope=10010, reaction=102, isomeric=0)
Explanation: The method includes a special flag isomeric, which is set to zero by default.
The special flag either describes the isomeric state or fission yield distribution number.
End of explanation
from serpentTools.settings import rc
rc['microxs.getFY'] = False # True/False only
rc['microxs.getXS'] = True # True/False only
rc['microxs.getFlx'] = True # True/False only
Explanation: If the universe exists but the isotope or reaction does not, the method raises an error.
Settings
The MicroXSReader also has a collection of settings to control what data is stored. If none of these settings are modified, the default is to store all the data from the output file.
End of explanation
mdx = serpentTools.read(mdxFile)
# fission yields are not stored on the reader
mdx.nfy.keys()
Explanation: microxs.getFY: True or False, store fission yields
microxs.getXS: True or False, store group cross-sections and uncertainties
microxs.getFlx: True or False, store flux ratios and uncertainties
End of explanation |
13,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
python生成器
本文讲解一下python生成器的基本用法。
生成器的使用场景
当数据量很大的时候,比如从一个超大文本文件中读取内容,如果一下子把数据全部放在列表中,相当于一下子把大量数据放在了内存中,有可能造成内存溢出。那么如何解决呢?
解决方案:不存储所有的数据,而是存储列表元素的生成算法(相当于递推公式),只在使用的时候再根据生成算法生成相应的元素(惰性计算),这就是生成器。
创建生成器的方式
把列表生成式的中括号改为小括号
Step1: 用yield关键字
如果生成器的递推算法比较复杂,列表生成式的方式已经无法满足要求,那么可以用函数+yield关键字的方式来创建生成器。
如果一个函数中出现了yield关键字,那么这个函数就不再是一个普通函数了,而变成了一个生成器,例如:
Step2: 函数遇到yield就中断的特性
Step3: 生成器的应用:生成斐波那契数列 | Python Code:
a = (x for x in range(3))
print '【Output】'
print type(a)
print a.next()
print '-----'
for x in a:
print x
Explanation: python生成器
本文讲解一下python生成器的基本用法。
生成器的使用场景
当数据量很大的时候,比如从一个超大文本文件中读取内容,如果一下子把数据全部放在列表中,相当于一下子把大量数据放在了内存中,有可能造成内存溢出。那么如何解决呢?
解决方案:不存储所有的数据,而是存储列表元素的生成算法(相当于递推公式),只在使用的时候再根据生成算法生成相应的元素(惰性计算),这就是生成器。
创建生成器的方式
把列表生成式的中括号改为小括号
End of explanation
def getNum(max):
x = 0
while x < max:
yield x # 相当于把普通函数的return语句变成了yield语句
x += 1
a = getNum(3)
print '【Output】'
print type(a)
for x in a:
print x
Explanation: 用yield关键字
如果生成器的递推算法比较复杂,列表生成式的方式已经无法满足要求,那么可以用函数+yield关键字的方式来创建生成器。
如果一个函数中出现了yield关键字,那么这个函数就不再是一个普通函数了,而变成了一个生成器,例如:
End of explanation
def get():
for i in range(3):
print 'step' + str(i)
yield i
yield 111
for i in range(10,12):
print 'step' + str(i)
yield i
yield 222
a = get()
print '【Output】'
for x in a:
print x
Explanation: 函数遇到yield就中断的特性
End of explanation
def fib(max):
a,m,n = 0,1,1
while(a < max):
yield m
m,n = n,m+n
a += 1
print '【Output】'
for x in fib(6):
print x
Explanation: 生成器的应用:生成斐波那契数列
End of explanation |
13,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
svgpathtools
svgpathtools is a collection of tools for manipulating and analyzing SVG Path objects and Bézier curves.
Features
svgpathtools contains functions designed to easily read, write and display SVG files as well as a large selection of geometrically-oriented tools to transform and analyze path elements.
Additionally, the submodule bezier.py contains tools for for working with general nth order Bezier curves stored as n-tuples.
Some included tools
Step1: The Path class is a mutable sequence, so it behaves much like a list.
So segments can appended, inserted, set by index, deleted, enumerated, sliced out, etc.
Step2: Reading SVGSs
The svg2paths() function converts an svgfile to a list of Path objects and a separate list of dictionaries containing the attributes of each said path.
Note
Step3: Writing SVGSs (and some geometric functions and methods)
The wsvg() function creates an SVG file from a list of path. This function can do many things (see docstring in paths2svg.py for more information) and is meant to be quick and easy to use.
Note
Step4: There will be many more examples of writing and displaying path data below.
The .point() method and transitioning between path and path segment parameterizations
SVG Path elements and their segments have official parameterizations.
These parameterizations can be accessed using the Path.point(), Line.point(), QuadraticBezier.point(), CubicBezier.point(), and Arc.point() methods.
All these parameterizations are defined over the domain 0 <= t <= 1.
Note
Step5: Bezier curves as NumPy polynomial objects
Another great way to work with the parameterizations for Line, QuadraticBezier, and CubicBezier objects is to convert them to numpy.poly1d objects. This is done easily using the Line.poly(), QuadraticBezier.poly() and CubicBezier.poly() methods.
There's also a polynomial2bezier() function in the pathtools.py submodule to convert polynomials back to Bezier curves.
Note
Step6: The ability to convert between Bezier objects to NumPy polynomial objects is very useful. For starters, we can take turn a list of Bézier segments into a NumPy array
Numpy Array operations on Bézier path segments
Example available here
To further illustrate the power of being able to convert our Bezier curve objects to numpy.poly1d objects and back, lets compute the unit tangent vector of the above CubicBezier object, b, at t=0.5 in four different ways.
Tangent vectors (and more on NumPy polynomials)
Step7: Translations (shifts), reversing orientation, and normal vectors
Step8: Rotations and Translations
Step9: arc length and inverse arc length
Here we'll create an SVG that shows off the parametric and geometric midpoints of the paths from test.svg. We'll need to compute use the Path.length(), Line.length(), QuadraticBezier.length(), CubicBezier.length(), and Arc.length() methods, as well as the related inverse arc length methods .ilength() function to do this.
Step10: Intersections between Bezier curves
Step12: An Advanced Application | Python Code:
from __future__ import division, print_function
# Coordinates are given as points in the complex plane
from svgpathtools import Path, Line, QuadraticBezier, CubicBezier, Arc
seg1 = CubicBezier(300+100j, 100+100j, 200+200j, 200+300j) # A cubic beginning at (300, 100) and ending at (200, 300)
seg2 = Line(200+300j, 250+350j) # A line beginning at (200, 300) and ending at (250, 350)
path = Path(seg1, seg2) # A path traversing the cubic and then the line
# We could alternatively created this Path object using a d-string
from svgpathtools import parse_path
path_alt = parse_path('M 300 100 C 100 100 200 200 200 300 L 250 350')
# Let's check that these two methods are equivalent
print(path)
print(path_alt)
print(path == path_alt)
# On a related note, the Path.d() method returns a Path object's d-string
print(path.d())
print(parse_path(path.d()) == path)
Explanation: svgpathtools
svgpathtools is a collection of tools for manipulating and analyzing SVG Path objects and Bézier curves.
Features
svgpathtools contains functions designed to easily read, write and display SVG files as well as a large selection of geometrically-oriented tools to transform and analyze path elements.
Additionally, the submodule bezier.py contains tools for for working with general nth order Bezier curves stored as n-tuples.
Some included tools:
read, write, and display SVG files containing Path (and other) SVG elements
convert Bézier path segments to numpy.poly1d (polynomial) objects
convert polynomials (in standard form) to their Bézier form
compute tangent vectors and (right-hand rule) normal vectors
compute curvature
break discontinuous paths into their continuous subpaths.
efficiently compute intersections between paths and/or segments
find a bounding box for a path or segment
reverse segment/path orientation
crop and split paths and segments
smooth paths (i.e. smooth away kinks to make paths differentiable)
transition maps from path domain to segment domain and back (T2t and t2T)
compute area enclosed by a closed path
compute arc length
compute inverse arc length
convert RGB color tuples to hexadecimal color strings and back
Prerequisites
numpy
svgwrite
Setup
If not already installed, you can install the prerequisites using pip.
bash
$ pip install numpy
bash
$ pip install svgwrite
Then install svgpathtools:
bash
$ pip install svgpathtools
Alternative Setup
You can download the source from Github and install by using the command (from inside the folder containing setup.py):
bash
$ python setup.py install
Credit where credit's due
Much of the core of this module was taken from the svg.path (v2.0) module. Interested svg.path users should see the compatibility notes at bottom of this readme.
Basic Usage
Classes
The svgpathtools module is primarily structured around four path segment classes: Line, QuadraticBezier, CubicBezier, and Arc. There is also a fifth class, Path, whose objects are sequences of (connected or disconnected<sup id="a1">1</sup>) path segment objects.
Line(start, end)
Arc(start, radius, rotation, large_arc, sweep, end) Note: See docstring for a detailed explanation of these parameters
QuadraticBezier(start, control, end)
CubicBezier(start, control1, control2, end)
Path(*segments)
See the relevant docstrings in path.py or the official SVG specifications for more information on what each parameter means.
<u id="f1">1</u> Warning: Some of the functionality in this library has not been tested on discontinuous Path objects. A simple workaround is provided, however, by the Path.continuous_subpaths() method. ↩
End of explanation
# Let's append another to the end of it
path.append(CubicBezier(250+350j, 275+350j, 250+225j, 200+100j))
print(path)
# Let's replace the first segment with a Line object
path[0] = Line(200+100j, 200+300j)
print(path)
# You may have noticed that this path is connected and now is also closed (i.e. path.start == path.end)
print("path is continuous? ", path.iscontinuous())
print("path is closed? ", path.isclosed())
# The curve the path follows is not, however, smooth (differentiable)
from svgpathtools import kinks, smoothed_path
print("path contains non-differentiable points? ", len(kinks(path)) > 0)
# If we want, we can smooth these out (Experimental and only for line/cubic paths)
# Note: smoothing will always works (except on 180 degree turns), but you may want
# to play with the maxjointsize and tightness parameters to get pleasing results
# Note also: smoothing will increase the number of segments in a path
spath = smoothed_path(path)
print("spath contains non-differentiable points? ", len(kinks(spath)) > 0)
print(spath)
# Let's take a quick look at the path and its smoothed relative
# The following commands will open two browser windows to display path and spaths
from svgpathtools import disvg
from time import sleep
disvg(path)
sleep(1) # needed when not giving the SVGs unique names (or not using timestamp)
disvg(spath)
print("Notice that path contains {} segments and spath contains {} segments."
"".format(len(path), len(spath)))
Explanation: The Path class is a mutable sequence, so it behaves much like a list.
So segments can appended, inserted, set by index, deleted, enumerated, sliced out, etc.
End of explanation
# Read SVG into a list of path objects and list of dictionaries of attributes
from svgpathtools import svg2paths, wsvg
paths, attributes = svg2paths('test.svg')
# Update: You can now also extract the svg-attributes by setting
# return_svg_attributes=True, or with the convenience function svg2paths2
from svgpathtools import svg2paths2
paths, attributes, svg_attributes = svg2paths2('test.svg')
# Let's print out the first path object and the color it was in the SVG
# We'll see it is composed of two CubicBezier objects and, in the SVG file it
# came from, it was red
redpath = paths[0]
redpath_attribs = attributes[0]
print(redpath)
print(redpath_attribs['stroke'])
Explanation: Reading SVGSs
The svg2paths() function converts an svgfile to a list of Path objects and a separate list of dictionaries containing the attributes of each said path.
Note: Line, Polyline, Polygon, and Path SVG elements can all be converted to Path objects using this function.
End of explanation
# Let's make a new SVG that's identical to the first
wsvg(paths, attributes=attributes, svg_attributes=svg_attributes, filename='output1.svg')
Explanation: Writing SVGSs (and some geometric functions and methods)
The wsvg() function creates an SVG file from a list of path. This function can do many things (see docstring in paths2svg.py for more information) and is meant to be quick and easy to use.
Note: Use the convenience function disvg() (or set 'openinbrowser=True') to automatically attempt to open the created svg file in your default SVG viewer.
End of explanation
# Example:
# Let's check that the first segment of redpath starts
# at the same point as redpath
firstseg = redpath[0]
print(redpath.point(0) == firstseg.point(0) == redpath.start == firstseg.start)
# Let's check that the last segment of redpath ends on the same point as redpath
lastseg = redpath[-1]
print(redpath.point(1) == lastseg.point(1) == redpath.end == lastseg.end)
# This next boolean should return False as redpath is composed multiple segments
print(redpath.point(0.5) == firstseg.point(0.5))
# If we want to figure out which segment of redpoint the
# point redpath.point(0.5) lands on, we can use the path.T2t() method
k, t = redpath.T2t(0.5)
print(redpath[k].point(t) == redpath.point(0.5))
Explanation: There will be many more examples of writing and displaying path data below.
The .point() method and transitioning between path and path segment parameterizations
SVG Path elements and their segments have official parameterizations.
These parameterizations can be accessed using the Path.point(), Line.point(), QuadraticBezier.point(), CubicBezier.point(), and Arc.point() methods.
All these parameterizations are defined over the domain 0 <= t <= 1.
Note: In this document and in inline documentation and doctrings, I use a capital T when referring to the parameterization of a Path object and a lower case t when referring speaking about path segment objects (i.e. Line, QaudraticBezier, CubicBezier, and Arc objects).
Given a T value, the Path.T2t() method can be used to find the corresponding segment index, k, and segment parameter, t, such that path.point(T)=path[k].point(t).
There is also a Path.t2T() method to solve the inverse problem.
End of explanation
# Example:
b = CubicBezier(300+100j, 100+100j, 200+200j, 200+300j)
p = b.poly()
# p(t) == b.point(t)
print(p(0.235) == b.point(0.235))
# What is p(t)? It's just the cubic b written in standard form.
bpretty = "{}*(1-t)^3 + 3*{}*(1-t)^2*t + 3*{}*(1-t)*t^2 + {}*t^3".format(*b.bpoints())
print("The CubicBezier, b.point(x) = \n\n" +
bpretty + "\n\n" +
"can be rewritten in standard form as \n\n" +
str(p).replace('x','t'))
Explanation: Bezier curves as NumPy polynomial objects
Another great way to work with the parameterizations for Line, QuadraticBezier, and CubicBezier objects is to convert them to numpy.poly1d objects. This is done easily using the Line.poly(), QuadraticBezier.poly() and CubicBezier.poly() methods.
There's also a polynomial2bezier() function in the pathtools.py submodule to convert polynomials back to Bezier curves.
Note: cubic Bezier curves are parameterized as $$\mathcal{B}(t) = P_0(1-t)^3 + 3P_1(1-t)^2t + 3P_2(1-t)t^2 + P_3t^3$$
where $P_0$, $P_1$, $P_2$, and $P_3$ are the control points start, control1, control2, and end, respectively, that svgpathtools uses to define a CubicBezier object. The CubicBezier.poly() method expands this polynomial to its standard form
$$\mathcal{B}(t) = c_0t^3 + c_1t^2 +c_2t+c3$$
where
$$\begin{bmatrix}c_0\c_1\c_2\c_3\end{bmatrix} =
\begin{bmatrix}
-1 & 3 & -3 & 1\
3 & -6 & -3 & 0\
-3 & 3 & 0 & 0\
1 & 0 & 0 & 0\
\end{bmatrix}
\begin{bmatrix}P_0\P_1\P_2\P_3\end{bmatrix}$$
QuadraticBezier.poly() and Line.poly() are defined similarly.
End of explanation
t = 0.5
### Method 1: the easy way
u1 = b.unit_tangent(t)
### Method 2: another easy way
# Note: This way will fail if it encounters a removable singularity.
u2 = b.derivative(t)/abs(b.derivative(t))
### Method 2: a third easy way
# Note: This way will also fail if it encounters a removable singularity.
dp = p.deriv()
u3 = dp(t)/abs(dp(t))
### Method 4: the removable-singularity-proof numpy.poly1d way
# Note: This is roughly how Method 1 works
from svgpathtools import real, imag, rational_limit
dx, dy = real(dp), imag(dp) # dp == dx + 1j*dy
p_mag2 = dx**2 + dy**2 # p_mag2(t) = |p(t)|**2
# Note: abs(dp) isn't a polynomial, but abs(dp)**2 is, and,
# the limit_{t->t0}[f(t) / abs(f(t))] ==
# sqrt(limit_{t->t0}[f(t)**2 / abs(f(t))**2])
from cmath import sqrt
u4 = sqrt(rational_limit(dp**2, p_mag2, t))
print("unit tangent check:", u1 == u2 == u3 == u4)
# Let's do a visual check
mag = b.length()/4 # so it's not hard to see the tangent line
tangent_line = Line(b.point(t), b.point(t) + mag*u1)
disvg([b, tangent_line], 'bg', nodes=[b.point(t)])
Explanation: The ability to convert between Bezier objects to NumPy polynomial objects is very useful. For starters, we can take turn a list of Bézier segments into a NumPy array
Numpy Array operations on Bézier path segments
Example available here
To further illustrate the power of being able to convert our Bezier curve objects to numpy.poly1d objects and back, lets compute the unit tangent vector of the above CubicBezier object, b, at t=0.5 in four different ways.
Tangent vectors (and more on NumPy polynomials)
End of explanation
# Speaking of tangents, let's add a normal vector to the picture
n = b.normal(t)
normal_line = Line(b.point(t), b.point(t) + mag*n)
disvg([b, tangent_line, normal_line], 'bgp', nodes=[b.point(t)])
# and let's reverse the orientation of b!
# the tangent and normal lines should be sent to their opposites
br = b.reversed()
# Let's also shift b_r over a bit to the right so we can view it next to b
# The simplest way to do this is br = br.translated(3*mag), but let's use
# the .bpoints() instead, which returns a Bezier's control points
br.start, br.control1, br.control2, br.end = [3*mag + bpt for bpt in br.bpoints()] #
tangent_line_r = Line(br.point(t), br.point(t) + mag*br.unit_tangent(t))
normal_line_r = Line(br.point(t), br.point(t) + mag*br.normal(t))
wsvg([b, tangent_line, normal_line, br, tangent_line_r, normal_line_r],
'bgpkgp', nodes=[b.point(t), br.point(t)], filename='vectorframes.svg',
text=["b's tangent", "br's tangent"], text_path=[tangent_line, tangent_line_r])
Explanation: Translations (shifts), reversing orientation, and normal vectors
End of explanation
# Let's take a Line and an Arc and make some pictures
top_half = Arc(start=-1, radius=1+2j, rotation=0, large_arc=1, sweep=1, end=1)
midline = Line(-1.5, 1.5)
# First let's make our ellipse whole
bottom_half = top_half.rotated(180)
decorated_ellipse = Path(top_half, bottom_half)
# Now let's add the decorations
for k in range(12):
decorated_ellipse.append(midline.rotated(30*k))
# Let's move it over so we can see the original Line and Arc object next
# to the final product
decorated_ellipse = decorated_ellipse.translated(4+0j)
wsvg([top_half, midline, decorated_ellipse], filename='decorated_ellipse.svg')
Explanation: Rotations and Translations
End of explanation
# First we'll load the path data from the file test.svg
paths, attributes = svg2paths('test.svg')
# Let's mark the parametric midpoint of each segment
# I say "parametric" midpoint because Bezier curves aren't
# parameterized by arclength
# If they're also the geometric midpoint, let's mark them
# purple and otherwise we'll mark the geometric midpoint green
min_depth = 5
error = 1e-4
dots = []
ncols = []
nradii = []
for path in paths:
for seg in path:
parametric_mid = seg.point(0.5)
seg_length = seg.length()
if seg.length(0.5)/seg.length() == 1/2:
dots += [parametric_mid]
ncols += ['purple']
nradii += [5]
else:
t_mid = seg.ilength(seg_length/2)
geo_mid = seg.point(t_mid)
dots += [parametric_mid, geo_mid]
ncols += ['red', 'green']
nradii += [5] * 2
# In 'output2.svg' the paths will retain their original attributes
wsvg(paths, nodes=dots, node_colors=ncols, node_radii=nradii,
attributes=attributes, filename='output2.svg')
Explanation: arc length and inverse arc length
Here we'll create an SVG that shows off the parametric and geometric midpoints of the paths from test.svg. We'll need to compute use the Path.length(), Line.length(), QuadraticBezier.length(), CubicBezier.length(), and Arc.length() methods, as well as the related inverse arc length methods .ilength() function to do this.
End of explanation
# Let's find all intersections between redpath and the other
redpath = paths[0]
redpath_attribs = attributes[0]
intersections = []
for path in paths[1:]:
for (T1, seg1, t1), (T2, seg2, t2) in redpath.intersect(path):
intersections.append(redpath.point(T1))
disvg(paths, filename='output_intersections.svg', attributes=attributes,
nodes = intersections, node_radii = [5]*len(intersections))
Explanation: Intersections between Bezier curves
End of explanation
from svgpathtools import parse_path, Line, Path, wsvg
def offset_curve(path, offset_distance, steps=1000):
Takes in a Path object, `path`, and a distance,
`offset_distance`, and outputs an piecewise-linear approximation
of the 'parallel' offset curve.
nls = []
for seg in path:
ct = 1
for k in range(steps):
t = k / steps
offset_vector = offset_distance * seg.normal(t)
nl = Line(seg.point(t), seg.point(t) + offset_vector)
nls.append(nl)
connect_the_dots = [Line(nls[k].end, nls[k+1].end) for k in range(len(nls)-1)]
if path.isclosed():
connect_the_dots.append(Line(nls[-1].end, nls[0].end))
offset_path = Path(*connect_the_dots)
return offset_path
# Examples:
path1 = parse_path("m 288,600 c -52,-28 -42,-61 0,-97 ")
path2 = parse_path("M 151,395 C 407,485 726.17662,160 634,339").translated(300)
path3 = parse_path("m 117,695 c 237,-7 -103,-146 457,0").translated(500+400j)
paths = [path1, path2, path3]
offset_distances = [10*k for k in range(1,51)]
offset_paths = []
for path in paths:
for distances in offset_distances:
offset_paths.append(offset_curve(path, distances))
# Note: This will take a few moments
wsvg(paths + offset_paths, 'g'*len(paths) + 'r'*len(offset_paths), filename='offset_curves.svg')
Explanation: An Advanced Application: Offsetting Paths
Here we'll find the offset curve for a few paths.
End of explanation |
13,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kubeflow E2E MNIST Case
Step1: Configure the Docker Registry for Kubeflow Fairing
In order to build docker images from your notebook we need a docker registry where the images will be stored
Note
Step2: Create PV/PVC to Store the Exported Model
Create Persistent Volume(PV) and Persistent Volume Claim(PVC), the PVC will be used by pods of training and serving for local mode in steps below.
Note
Step3: (Optional) Skip below creating PV/PVC step if you set an existing PV and PVC.
Step4: Use Kubeflow fairing to build the docker image and launch a TFJob for training
Use kubeflow fairing to build a docker image that includes all your dependencies
Launch a TFJob in the on premise cluster to taining model.
Firstly set some custom training parameters for TFJob.
Step5: Use Kubeflow Fairing to build a docker image and push to docker registry, and then launch a TFJob in the on-prem cluster for distributed training model.
Step6: Get the Created TFJobs
Step7: Wait For the Training Job to finish
Step8: Check if the TFJob succeeded.
Step9: Get the Training Logs
Step10: Deploy Service using KFServing
Step11: Get the InferenceService
Step12: Get the InferenceService and Service Endpoint
Step13: Run a prediction to the InferenceService
Step14: Clean Up
Delete the TFJob
Step15: Delete the InferenceService. | Python Code:
!pip show kubeflow-fairing
Explanation: Kubeflow E2E MNIST Case: Building, Distributed Training and Serving
This example guides you through:
1. Taking an example TensorFlow model and modifying it to support distributed training.
1. Using Kubeflow Fairing to build docker image and launch a TFJob to train model.
1. Using Kubeflow Fairing to create InferenceService (KFServing) for the trained model.
1. Clean up the TFJob and InferenceService using kubeflow-tfjob and kfserving SDK client.
Requirements
The Kubeflow Fairing, TF-Operator and KFServing have been installed in Kubenertes Cluster.
Prepare Training Code
We modified the examples to be better suited for distributed training and model serving. There is a delta between existing distributed mnist examples and what's needed to run well as a TFJob. The updated training code is mnist.py.
Install Required Libraries
End of explanation
# Set docker registry to store image.
# Ensure you have permission for pushing docker image requests.
DOCKER_REGISTRY = 'index.docker.io/jinchi'
# Set namespace. Note that the created PVC should be in the namespace.
my_namespace = 'hejinchi'
# You also can get the default target namepspace using below API.
#namespace = fairing_utils.get_default_target_namespace()
Explanation: Configure the Docker Registry for Kubeflow Fairing
In order to build docker images from your notebook we need a docker registry where the images will be stored
Note: The below section must be updated to your values.
End of explanation
# To satify the distributed training, the PVC should be access from all nodes in the cluster.
# The example creates a NFS PV to satify that.
nfs_server = '172.16.189.69'
nfs_path = '/opt/kubeflow/data/mnist'
pv_name = 'kubeflow-mnist'
pvc_name = 'mnist-pvc'
Explanation: Create PV/PVC to Store the Exported Model
Create Persistent Volume(PV) and Persistent Volume Claim(PVC), the PVC will be used by pods of training and serving for local mode in steps below.
Note: The below section must be updated to your values.
End of explanation
from kubernetes import client as k8s_client
from kubernetes import config as k8s_config
from kubeflow.fairing.utils import is_running_in_k8s
pv_yaml = f'''
apiVersion: v1
kind: PersistentVolume
metadata:
name: {pv_name}
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
path: {nfs_path}
server: {nfs_server}
'''
pvc_yaml = f'''
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {pvc_name}
namespace: {my_namespace}
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 10Gi
'''
if is_running_in_k8s():
k8s_config.load_incluster_config()
else:
k8s_config.load_kube_config()
k8s_core_api = k8s_client.CoreV1Api()
k8s_core_api.create_persistent_volume(yaml.safe_load(pv_yaml))
k8s_core_api.create_namespaced_persistent_volume_claim(my_namespace, yaml.safe_load(pvc_yaml))
Explanation: (Optional) Skip below creating PV/PVC step if you set an existing PV and PVC.
End of explanation
num_chief = 1 #number of Chief in TFJob
num_ps = 1 #number of PS in TFJob
num_workers = 2 #number of Worker in TFJob
model_dir = "/mnt"
export_path = "/mnt/export"
train_steps = "1000"
batch_size = "100"
learning_rate = "0.01"
Explanation: Use Kubeflow fairing to build the docker image and launch a TFJob for training
Use kubeflow fairing to build a docker image that includes all your dependencies
Launch a TFJob in the on premise cluster to taining model.
Firstly set some custom training parameters for TFJob.
End of explanation
import uuid
from kubeflow import fairing
from kubeflow.fairing.kubernetes.utils import mounting_pvc
tfjob_name = f'mnist-training-{uuid.uuid4().hex[:4]}'
output_map = {
"Dockerfile": "Dockerfile",
"mnist.py": "mnist.py"
}
command=["python",
"/opt/mnist.py",
"--tf-model-dir=" + model_dir,
"--tf-export-dir=" + export_path,
"--tf-train-steps=" + train_steps,
"--tf-batch-size=" + batch_size,
"--tf-learning-rate=" + learning_rate]
fairing.config.set_preprocessor('python', command=command, path_prefix="/app", output_map=output_map)
fairing.config.set_builder(name='docker', registry=DOCKER_REGISTRY, base_image="",
image_name="mnist", dockerfile_path="Dockerfile")
fairing.config.set_deployer(name='tfjob', namespace=my_namespace, stream_log=False, job_name=tfjob_name,
chief_count=num_chief, worker_count=num_workers, ps_count=num_ps,
pod_spec_mutators=[mounting_pvc(pvc_name=pvc_name, pvc_mount_path=model_dir)])
fairing.config.run()
Explanation: Use Kubeflow Fairing to build a docker image and push to docker registry, and then launch a TFJob in the on-prem cluster for distributed training model.
End of explanation
from kubeflow.tfjob import TFJobClient
tfjob_client = TFJobClient()
tfjob_client.get(tfjob_name, namespace=my_namespace)
Explanation: Get the Created TFJobs
End of explanation
tfjob_client.wait_for_job(tfjob_name, namespace=my_namespace, watch=True)
Explanation: Wait For the Training Job to finish
End of explanation
tfjob_client.is_job_succeeded(tfjob_name, namespace=my_namespace)
Explanation: Check if the TFJob succeeded.
End of explanation
tfjob_client.get_logs(tfjob_name, namespace=my_namespace)
Explanation: Get the Training Logs
End of explanation
from kubeflow.fairing.deployers.kfserving.kfserving import KFServing
isvc_name = f'mnist-service-{uuid.uuid4().hex[:4]}'
isvc = KFServing('tensorflow', namespace=my_namespace, isvc_name=isvc_name,
default_storage_uri='pvc://' + pvc_name + '/export')
isvc.deploy(isvc.generate_isvc())
Explanation: Deploy Service using KFServing
End of explanation
from kfserving import KFServingClient
kfserving_client = KFServingClient()
kfserving_client.get(namespace=my_namespace)
Explanation: Get the InferenceService
End of explanation
mnist_isvc = kfserving_client.get(isvc_name, namespace=my_namespace)
mnist_isvc_name = mnist_isvc['metadata']['name']
mnist_isvc_endpoint = mnist_isvc['status'].get('url', '')
print("MNIST Service Endpoint: " + mnist_isvc_endpoint)
Explanation: Get the InferenceService and Service Endpoint
End of explanation
ISTIO_CLUSTER_IP=!kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.clusterIP}'
CLUSTER_IP=ISTIO_CLUSTER_IP[0]
MODEL_HOST=f"Host: {mnist_isvc_name}.{my_namespace}.example.com"
!curl -v -H "{MODEL_HOST}" http://{CLUSTER_IP}/v1/models/{mnist_isvc_name}:predict -d @./input.json
Explanation: Run a prediction to the InferenceService
End of explanation
tfjob_client.delete(tfjob_name, namespace=my_namespace)
Explanation: Clean Up
Delete the TFJob
End of explanation
kfserving_client.delete(isvc_name, namespace=my_namespace)
Explanation: Delete the InferenceService.
End of explanation |
13,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Generalized Minimal Residual Method </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version
Step1: <div id='intro' />
Introduction
Welcome to another edition of our Jupyter Notebooks. A few notebooks back, we saw that the Conjugate Gradient Method, an iterative method, was very useful to solve $A\,\mathbf{x}=\mathbf{b}$ but it only worked when $A$ was positive definite and symmetric. So now we need an iterative method that works with nonsymmetric linear system of equations, and for that we have the Generalized Minimum Residual Method (GMRes). It works really well for finding the solution of large, sparse (and dense as well), nonsymmetric linear systems of equations. Of course, it will also have trouble for ill-conditioned linear system of equations. But it is really easy to add a left or right or both preconditioners!
<div id='LS' />
A quick review on Least Squares
Least Squares is used to solve overdetemined linear systems of equations $A\,\mathbf{x} = \mathbf{b}$. That is, for example, a linear system of equations where there are more equations than unknowns. It finds the best $\overline{\mathbf{x}}$ so that it minimizes the euclidean length of $\mathbf{r} = \mathbf{b} - A\,\mathbf{x}$.
So, you might be wondering, what does Least Squares have to do with GMRes? WELL, since you're dying to know, I'll tell you
Step2: A very simple example
Step3: Another example, how may iteration does it need to converge?
Step4: Plotting the residual over the iterations | Python Code:
import numpy as np
import scipy as sp
from scipy import linalg as la
import matplotlib.pyplot as plt
import scipy.sparse.linalg
%matplotlib inline
#%load_ext memory_profiler
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
M=8
Explanation: <center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Generalized Minimal Residual Method </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.21</h2>
</center>
Table of Contents
Introduction
Short reminder about Least Squares
GMRes
Theoretical Problems
Practical Problems
Acknowledgements
End of explanation
# This is a very instructive implementation of GMRes.
def GMRes(A, b, x0=np.array([0.0]), m=10, flag_display=True, threshold=1e-12):
n = len(b)
if len(x0)==1:
x0=np.zeros(n)
r0 = b - np.dot(A, x0)
nr0=np.linalg.norm(r0)
out_res=np.array(nr0)
Q = np.zeros((n,n))
H = np.zeros((n,n))
Q[:,0] = r0 / nr0
flag_break=False
for k in np.arange(np.min((m,n))):
y = np.dot(A, Q[:,k])
if flag_display:
print('||y||=',np.linalg.norm(y))
for j in np.arange(k+1):
H[j][k] = np.dot(Q[:,j], y)
if flag_display:
print('H[',j,'][',k,']=',H[j][k])
y = y - np.dot(H[j][k],Q[:,j])
if flag_display:
print('||y||=',np.linalg.norm(y))
# All but the last equation are treated equally. Why?
if k+1<n:
H[k+1][k] = np.linalg.norm(y)
if flag_display:
print('H[',k+1,'][',k,']=',H[k+1][k])
if (np.abs(H[k+1][k]) > 1e-16):
Q[:,k+1] = y/H[k+1][k]
else:
print('flag_break has been activated')
flag_break=True
# Do you remember e_1? The canonical vector.
e1 = np.zeros((k+1)+1)
e1[0]=1
H_tilde=H[0:(k+1)+1,0:k+1]
else:
H_tilde=H[0:k+1,0:k+1]
# Solving the 'SMALL' least square problem.
# This could be improved with Givens rotations!
ck = np.linalg.lstsq(H_tilde, nr0*e1)[0]
if k+1<n:
x = x0 + np.dot(Q[:,0:(k+1)], ck)
else:
x = x0 + np.dot(Q, ck)
# Why is 'norm_small' equal to 'norm_full'?
norm_small=np.linalg.norm(np.dot(H_tilde,ck)-nr0*e1)
out_res = np.append(out_res,norm_small)
if flag_display:
norm_full=np.linalg.norm(b-np.dot(A,x))
print('..........||b-A\,x_k||=',norm_full)
print('..........||H_k\,c_k-nr0*e1||',norm_small);
if flag_break:
if flag_display:
print('EXIT: flag_break=True')
break
if norm_small<threshold:
if flag_display:
print('EXIT: norm_small<threshold')
break
return x,out_res
Explanation: <div id='intro' />
Introduction
Welcome to another edition of our Jupyter Notebooks. A few notebooks back, we saw that the Conjugate Gradient Method, an iterative method, was very useful to solve $A\,\mathbf{x}=\mathbf{b}$ but it only worked when $A$ was positive definite and symmetric. So now we need an iterative method that works with nonsymmetric linear system of equations, and for that we have the Generalized Minimum Residual Method (GMRes). It works really well for finding the solution of large, sparse (and dense as well), nonsymmetric linear systems of equations. Of course, it will also have trouble for ill-conditioned linear system of equations. But it is really easy to add a left or right or both preconditioners!
<div id='LS' />
A quick review on Least Squares
Least Squares is used to solve overdetemined linear systems of equations $A\,\mathbf{x} = \mathbf{b}$. That is, for example, a linear system of equations where there are more equations than unknowns. It finds the best $\overline{\mathbf{x}}$ so that it minimizes the euclidean length of $\mathbf{r} = \mathbf{b} - A\,\mathbf{x}$.
So, you might be wondering, what does Least Squares have to do with GMRes? WELL, since you're dying to know, I'll tell you: the backward error of the system in GMRes is minimized at each iteration step using a Least Squares formulation.
<div id='GMR' />
GMRes
GMRes is a member of the family of Krylov methods. It finds an approximation of $\mathbf{x}$ restricted to live on the Krylov sub-space $\mathcal{K_k}$, where $\mathcal{K_k}={\mathbf{r}_0, A\,\mathbf{r}_0, A^2\,\mathbf{r}_0, \cdots, A^{k-1}\,\mathbf{r}_0}$ and $\mathbf{r}_0 = \mathbf{b} - A\,\mathbf{x}_0$ is the residual vector of the initial guess.
The idea behind this method is to look for improvements to the initial guess $\mathbf{x}_0$ in the Krylov space. At the $k$-th iteration, we enlarge the Krylov space by adding $A^k\,\mathbf{r}_0$, reorthogonalize the basis, and then use least squares to find the best improvement to add to $\mathbf{x}_0$.
The algorithm is as follows:
Generalized Minimum Residual Method
$\mathbf{x}0$ = initial guess<br>
$\mathbf{r}$ = $\mathbf{b} - A\,\mathbf{x}_0$<br>
$\mathbf{q}_1$ = $\mathbf{r} / \|\mathbf{r}\|_2$<br>
for $k = 1, ..., m$<br>
$\qquad \ \ \mathbf{y} = A\,\mathbf{q}_k$<br>
$\qquad$ for $j = 1,2,...,k$ <br>
$\qquad \qquad$ $h{jk} = \mathbf{q}j^*\,\mathbf{y}$<br>
$\qquad \qquad$ $\mathbf{y} = \mathbf{y} - h{jk}\, \mathbf{q}j$<br>
$\qquad$ end<br>
$\qquad \ h{k+1,k} = \|y\|2 \qquad$ (If $h{k+1,k} = 0$ , skip next line and terminate at bottom.) <br>
$\qquad \ \mathbf{q}{k+1} = \mathbf{y}/h{k+1,k}$ <br>
$\qquad$ Minimize $\left\|\widehat{H}_k\, \mathbf{c}_k - [\|\mathbf{r}\|_2 \ 0 \ 0 \ ... \ 0]^T \right\|_2$ for $\mathbf{c}_k$ <br>
$\qquad$ $\mathbf{x}_k = Q_k \, \mathbf{c}_k + \mathbf{x}_0$ <br>
end
Now we have to implement it.
End of explanation
A = np.array([[1,1,0],[0,1,0],[0,1,1]])
b = np.array([1,2,3])
x0 = np.zeros(3)
# scipy gmres
x_scipy = scipy.sparse.linalg.gmres(A,b,x0)[0]
# our gmres
x_our, _ = GMRes(A, b)
# numpy solve
x_np= np.linalg.solve(A,b)
# Showing the solutions
print('--------------------------------')
print('x_scipy',x_scipy)
print('x_our',x_our)
print('x_np',x_np)
Explanation: A very simple example
End of explanation
A = np.array([[0,0,0,1],[1,0,0,0],[0,1,0,0],[0,0,1,0]])
b = np.array([1,0,1,0])
x_our, _ = GMRes(A, b, m=10)
norm_full=np.linalg.norm(b-np.dot(A,x_our))
print(norm_full)
A = np.random.rand(10,10)+10*np.eye(10)
b = np.random.rand(10)
x_our, out_res = GMRes(A, b, m=10,flag_display=True)
norm_full=np.linalg.norm(b-np.dot(A,x_our))
print(norm_full)
Explanation: Another example, how may iteration does it need to converge?
End of explanation
plt.figure(figsize=(M,M))
plt.semilogy(out_res,'.k',markersize=20,label='residual')
plt.grid(True)
plt.xlabel(r'$k$')
plt.ylabel(r'$\|\mathbf{b}-A\,\mathbf{x}_k\|_2$')
plt.grid(True)
plt.show()
Explanation: Plotting the residual over the iterations
End of explanation |
13,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2><span style="color
Step1: Short tutorial
Setup input files and params
Step2: calculate distances
Step3: save results
Step4: Draw the matrix
Step5: Draw matrix reordered to match groups in imap | Python Code:
# conda install ipyrad -c bioconda
# conda install toyplot -c eaton-lab (optional)
import ipyrad.analysis as ipa
import toyplot
Explanation: <h2><span style="color:gray">ipyrad-analysis toolkit:</span> distance</h2>
Key features:
Calculate pairwise genetic distances between samples.
Filter SNPs to reduce missing data.
Impute missing data using population allele frequencies.
required software
End of explanation
# the path to your VCF or HDF5 formatted snps file
data = "/home/deren/Downloads/ref_pop2.snps.hdf5"
# group individuals into populations
imap = {
"virg": ["TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140"],
"mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
"gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
"bran": ["BJSL25", "BJSB3", "BJVL19"],
"fusi": ["MXED8", "MXGT4", "TXGR3", "TXMD3"],
"sagr": ["CUVN10", "CUCA4", "CUSV6", "CUMM5"],
"oleo": ["CRL0030", "CRL0001", "HNDA09", "BZBB1", "MXSA3017"],
}
# minimum n samples that must be present in each SNP from each group
minmap = {i: 0.5 for i in imap}
Explanation: Short tutorial
Setup input files and params
End of explanation
# load the snp data into distance tool with arguments
from ipyrad.analysis.distance import Distance
dist = Distance(
data=data,
imap=imap,
minmap=minmap,
mincov=0.5,
impute_method="sample",
subsample_snps=False,
)
dist.run()
Explanation: calculate distances
End of explanation
# save to a CSV file
dist.dists.to_csv("distances.csv")
# show the upper corner
dist.dists.head()
Explanation: save results
End of explanation
toyplot.matrix(
dist.dists,
bshow=False,
tshow=False,
rlocator=toyplot.locator.Explicit(
range(len(dist.names)),
sorted(dist.names),
));
Explanation: Draw the matrix
End of explanation
# get list of concatenated names from each group
ordered_names = []
for group in dist.imap.values():
ordered_names += group
# reorder matrix to match name order
ordered_matrix = dist.dists[ordered_names].T[ordered_names]
toyplot.matrix(
ordered_matrix,
bshow=False,
tshow=False,
rlocator=toyplot.locator.Explicit(
range(len(ordered_names)),
ordered_names,
));
Explanation: Draw matrix reordered to match groups in imap
End of explanation |
13,452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic data exercise
Step1: 1. Describe each attribute, both with basic statistics and plots. State clearly your assumptions and discuss your findings.
pclass
the class a person belongs to
Step2: there are 3 different classes
Step3: most are in class 3, but surprisingly class 1 has more passagengers than class 2
Step4: survived
States if the passenger survived the titanic sinking
Step5: most died
Step6: only 38% survived
name
the name of the passanger
Step7: apparently there are some with the same name
Step8: sex
the sex of the passenger
Step9: 64.4% are male
age
How old the passenger is
Step10: There are NaN values! But also floating point values, which is somewhat unusual but not a problem per se.
Step11: Age distribution in a boxplot
Step12: And the distribution of age plotted
Step13: sibsp
The number of siblings or spouses on the ship
Step14: Plot histogram
Step15: parch
The number of parents or children on the ship
Step16: Histogram
Step17: Let's find the family
Step18: This are the children and the parents of the 'big' familly. Sadly all died
Step19: All (registered) passengers had a ticket ;)
fare
How many they paid
Step20: There are people that did not pay anything
Step21: there is one NaN value
Step22: Someone got ripped of, or got the best room.
cabin
What cabin they are in
Step23: 1014 people have no cabin (all class 3?)
Step24: Even people in class 1 have no cabin (or it is unknown)
Some people have several cabines, but they are also occupied by several peoples, probablement families.
It would be quite complicated to take those 'multiple cabin' entries appart. With more time we could have done it.
Step25: embarked
Step26: two people have NaN in 'embarked'
Step27: boat
On what rescue-boat they were rescued
Step28: some have several boats.
body
the identification number of a body
Step29: 121 bodys got an number
home dest
Step30: 369 different home destinations
Lets find the most common one
Step31: Most come from New York
2. Use the groupby method to calculate the proportion of passengers that survived by sex
Step32: Then calcultate the percentages
Step33: 3. Calculate the same proportion, but by class and sex.
Step34: create a new column with the apply method
Step35: Here is a plot showing the survive rates. Note that the plot is not based on the data calculated above
Step36: We can see that 'women first' is true, but also 'class 1 first'
4. Create age categories
Step37: Then group the data in a sensible way to get the nice Table below.
Step38: And finally calculate the survive portion for all cases
Step39: Plots
Two plots showing this. The first showing the female and the second shows the male passengers
Step40: Almost all women from class 1 and 2 survived, in class 3 about 50% survived | Python Code:
import pandas as pd
import numpy as np
import glob # to find all files in folder
from datetime import datetime
from datetime import date, time
from dateutil.parser import parse
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_context('notebook')
pd.options.mode.chained_assignment = None # default='warn'
from IPython.core.display import HTML
HTML(filename='Data/titanic.html')
original_data = pd.read_excel('Data/titanic.xls')
original_data['total'] = 1 # add a colon only consisting of 1s to make couting easier
original_data.head(2)
Explanation: Titanic data exercise
End of explanation
pclass = original_data['pclass']
pclass.unique()
Explanation: 1. Describe each attribute, both with basic statistics and plots. State clearly your assumptions and discuss your findings.
pclass
the class a person belongs to
End of explanation
for c in pclass.unique():
print('nbr in class '+str(c)+': '+str(len(pclass[pclass == c])))
Explanation: there are 3 different classes
End of explanation
plt.hist(pclass.values)
Explanation: most are in class 3, but surprisingly class 1 has more passagengers than class 2
End of explanation
surv = original_data['survived']
surv.unique() # to make sure there are only 1 and 0
#how many survived?
surv.sum()
#how many died?
len(surv[surv == 0])
Explanation: survived
States if the passenger survived the titanic sinking
End of explanation
100/len(surv.values) * surv.sum()
Explanation: most died :(
End of explanation
name = original_data['name']
len(name.unique()) == len(name.values)
Explanation: only 38% survived
name
the name of the passanger
End of explanation
len(name.values) - len(name.unique())
#lets find them
original_data[name.isin(name[name.duplicated()].values)]
Explanation: apparently there are some with the same name
End of explanation
sex = original_data['sex']
sex.unique()
nbr_males = len(sex[sex == 'male'])
nbr_females= len(sex[sex == 'female'])
100/len(sex) * nbr_males
Explanation: sex
the sex of the passenger
End of explanation
age = original_data['age']
age.unique()
Explanation: 64.4% are male
age
How old the passenger is
End of explanation
age.min() # a baby?
age.max()
age.mean()
Explanation: There are NaN values! But also floating point values, which is somewhat unusual but not a problem per se.
End of explanation
sns.boxplot(age.dropna().values)
Explanation: Age distribution in a boxplot:
End of explanation
#plt.hist(age.values)
Explanation: And the distribution of age plotted:
End of explanation
sipsp = original_data['sibsp']
sipsp.unique()
sipsp.mean()
Explanation: sibsp
The number of siblings or spouses on the ship
End of explanation
plt.hist(sipsp)
Explanation: Plot histogram: Almost all traveled without siblings or spouses. there is apparently one family that traveled together (8 siblings are on board)
End of explanation
parch = original_data['parch']
parch.unique()
parch.mean()
Explanation: parch
The number of parents or children on the ship
End of explanation
plt.hist(parch)
Explanation: Histogram: Again almost noone traveled with their kids. The one big family is again seen here.
End of explanation
# the kids
original_data[original_data['sibsp'] == 8]
# the parents
original_data[original_data['parch'] == 9]
Explanation: Let's find the family
End of explanation
ticket = original_data['ticket']
len(ticket.unique())
ticket.dtype
len(ticket[ticket.isnull()])
Explanation: This are the children and the parents of the 'big' familly. Sadly all died :(
ticket
the ticketnbr the passanger had
End of explanation
fare = original_data['fare']
fare.mean()
fare.max()
fare.min()
Explanation: All (registered) passengers had a ticket ;)
fare
How many they paid
End of explanation
original_data[fare == 0]
fare.dtypes
original_data[fare.isnull()]
Explanation: There are people that did not pay anything
End of explanation
plt.hist(fare.dropna())
Explanation: there is one NaN value
End of explanation
cabin = original_data['cabin']
cabin.isnull().sum()
Explanation: Someone got ripped of, or got the best room.
cabin
What cabin they are in
End of explanation
plt.hist(original_data[cabin.isnull()]['pclass'])
Explanation: 1014 people have no cabin (all class 3?)
End of explanation
cabin.head()
Explanation: Even people in class 1 have no cabin (or it is unknown)
Some people have several cabines, but they are also occupied by several peoples, probablement families.
It would be quite complicated to take those 'multiple cabin' entries appart. With more time we could have done it.
End of explanation
embarked = original_data['embarked']
embarked.unique()
len(embarked[embarked.isnull()])
Explanation: embarked
End of explanation
sns.countplot(y="embarked", data=original_data, color="c");
Explanation: two people have NaN in 'embarked'
End of explanation
boat = original_data['boat']
boat.unique()
Explanation: boat
On what rescue-boat they were rescued
End of explanation
body = original_data['body']
body.count()
Explanation: some have several boats.
body
the identification number of a body
End of explanation
homedest = original_data['home.dest']
len(homedest.dropna().unique())
Explanation: 121 bodys got an number
home dest
End of explanation
original_data[['home.dest', 'total']].groupby(by='home.dest').sum().sort_values(by='total', ascending=False)
Explanation: 369 different home destinations
Lets find the most common one
End of explanation
survived_by_sex = original_data[['survived', 'sex']].groupby('sex').sum()
nbr_males = len(original_data[original_data['sex'] == 'male'])
nbr_females = len(original_data[original_data['sex'] == 'female'])
nbr_total = len(original_data['sex'])
survived_by_sex
print(nbr_total == nbr_females + nbr_males) # to check if consistent
Explanation: Most come from New York
2. Use the groupby method to calculate the proportion of passengers that survived by sex:
First gather the numbers
End of explanation
female_survived_percentage = (100/nbr_females) * survived_by_sex.at['female', 'survived']
male_survived_percentage = (100/nbr_males) * survived_by_sex.at['male', 'survived']
print('female surv: '+str(round(female_survived_percentage, 3))+'%')
print('male surv: '+str(round(male_survived_percentage, 3))+'%')
Explanation: Then calcultate the percentages
End of explanation
# make use of the 'total' column (which is all 1's in the original_data)
survived_by_class = original_data[['pclass', 'sex', 'survived', 'total']].groupby(['pclass', 'sex']).sum()
survived_by_class
def combine_surv_total(row):
#print(row)
return 100.0/row.total * row.survived
Explanation: 3. Calculate the same proportion, but by class and sex.
End of explanation
survived_by_class['survived in %'] = survived_by_class.apply(combine_surv_total, axis=1)
survived_by_class
Explanation: create a new column with the apply method
End of explanation
type(original_data['sex'])
sns.barplot(x='sex', y='survived', hue='pclass', data=original_data);
Explanation: Here is a plot showing the survive rates. Note that the plot is not based on the data calculated above
End of explanation
original_data.age.fillna(-1, inplace=True)
age_cats = pd.cut(original_data.age, [-2, 0+1e-6,14+1e-6,20+1e-6,64+1e-6,120], labels=['No age', 'child','adolescent','adult','senior'], include_lowest=True)
original_data['age-category'] = age_cats
catsdata = original_data[['sex', 'age-category', 'pclass', 'survived', 'total']]
Explanation: We can see that 'women first' is true, but also 'class 1 first'
4. Create age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior(65+), and calculate survival proportions by age category, class and sex.
Create the categories. We use the value -1 to show that the person has a NaN value as age (and put them in the category 'No age'.
End of explanation
grouped = catsdata.groupby(['sex', 'age-category', 'pclass']).sum().fillna(0)
grouped
Explanation: Then group the data in a sensible way to get the nice Table below.
End of explanation
def surv_proportions(row):
if row.total == 0:
return np.nan
return round(100.0/row.total * row.survived, 2)
grouped['survive-portion (%)'] = grouped.apply(surv_proportions, axis=1)
grouped
Explanation: And finally calculate the survive portion for all cases
End of explanation
sns.barplot(x="pclass", y="survived", hue="age-category", data=original_data[original_data['sex'] == 'female'])
Explanation: Plots
Two plots showing this. The first showing the female and the second shows the male passengers
End of explanation
sns.barplot(x="pclass", y="survived", hue="age-category", data=original_data[original_data['sex'] == 'male'])
Explanation: Almost all women from class 1 and 2 survived, in class 3 about 50% survived
End of explanation |
13,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Passing Messages to Processes
As with threads, a common use pattern for multiple processes is to divide a job up among several workers to run in parallel. Effective use of multiple processes usually requires some communication between them, so that work can be divided and results can be aggregated. A simple way to communicate between processes with multiprocessing is to use a Queue to pass messages back and forth. Any object that can be serialized with pickle can pass through a Queue.
Step1: A more complex example shows how to manage several workers consuming data from a JoinableQueue and passing results back to the parent process. The poison pill technique is used to stop the workers. After setting up the real tasks, the main program adds one “stop” value per worker to the job queue. When a worker encounters the special value, it breaks out of its processing loop. The main process uses the task queue’s join() method to wait for all of the tasks to finish before processing the results.
Step4: Signaling between Processes
The Event class provides a simple way to communicate state information between processes. An event can be toggled between set and unset states. Users of the event object can wait for it to change from unset to set, using an optional timeout value.
Step5: When wait() times out it returns without an error. The caller is responsible for checking the state of the event using is_set().
An event.set() call wakes every process that is waiting on that event.
Controlling Access to Resources
In situations when a single resource needs to be shared between multiple processes, a Lock can be used to avoid conflicting accesses.
Step8: Synchronizing Operations
Condition
Condition objects can be used to synchronize parts of a workflow so that some run in parallel but others run sequentially, even if they are in separate processes.
Step9: In this example, two processes run the second stage of a job in parallel, but only after the first stage is done.
Controlling Concurrent Access to Resources
Sometimes it is useful to allow more than one worker access to a resource at a time, while still limiting the overall number. For example, a connection pool might support a fixed number of simultaneous connections, or a network application might support a fixed number of concurrent downloads. A Semaphore is one way to manage those connections.
Step10: Managing Shared State
In the previous example, the list of active processes is maintained centrally in the ActivePool instance via a special type of list object created by a Manager. The Manager is responsible for coordinating shared information state between all of its users.
Step11: By creating the list through the manager, it is shared and updates are seen in all processes. Dictionaries are also supported.
Shared Namespaces
In addition to dictionaries and lists, a Manager can create a shared Namespace.
Step12: Any named value added to the Namespace is visible to all of the clients that receive the Namespace instance.
It is important to know that updates to the contents of mutable values in the namespace are not propagated automatically.
Step13: Process Pools
The Pool class can be used to manage a fixed number of workers for simple cases where the work to be done can be broken up and distributed between workers independently. The return values from the jobs are collected and returned as a list. The pool arguments include the number of processes and a function to run when starting the task process (invoked once per child).
Step14: By default, Pool creates a fixed number of worker processes and passes jobs to them until there are no more jobs. Setting the maxtasksperchild parameter tells the pool to restart a worker process after it has finished a few tasks, preventing long-running workers from consuming ever more system resources. | Python Code:
import multiprocessing
class MyFancyClass:
def __init__(self, name):
self.name = name
def do_something(self):
proc_name = multiprocessing.current_process().name
print('Doing something fancy in {} for {}!'.format(
proc_name, self.name))
def worker(q):
obj = q.get()
obj.do_something()
if __name__ == '__main__':
queue = multiprocessing.Queue()
p = multiprocessing.Process(target=worker, args=(queue,))
p.start()
queue.put(MyFancyClass('Fancy Dan'))
# Wait for the worker to finish
queue.close()
queue.join_thread()
p.join()
Explanation: Passing Messages to Processes
As with threads, a common use pattern for multiple processes is to divide a job up among several workers to run in parallel. Effective use of multiple processes usually requires some communication between them, so that work can be divided and results can be aggregated. A simple way to communicate between processes with multiprocessing is to use a Queue to pass messages back and forth. Any object that can be serialized with pickle can pass through a Queue.
End of explanation
import multiprocessing
import time
class Consumer(multiprocessing.Process):
def __init__(self, task_queue, result_queue):
multiprocessing.Process.__init__(self)
self.task_queue = task_queue
self.result_queue = result_queue
def run(self):
proc_name = self.name
while True:
next_task = self.task_queue.get()
if next_task is None:
# Poison pill means shutdown
print('{}: Exiting'.format(proc_name))
self.task_queue.task_done()
break
print('{}: {}'.format(proc_name, next_task))
answer = next_task()
self.task_queue.task_done()
self.result_queue.put(answer)
class Task:
def __init__(self, a, b):
self.a = a
self.b = b
def __call__(self):
time.sleep(0.1) # pretend to take time to do the work
return '{self.a} * {self.b} = {product}'.format(
self=self, product=self.a * self.b)
def __str__(self):
return '{self.a} * {self.b}'.format(self=self)
if __name__ == '__main__':
# Establish communication queues
tasks = multiprocessing.JoinableQueue()
results = multiprocessing.Queue()
# Start consumers
num_consumers = multiprocessing.cpu_count() * 2
print('Creating {} consumers'.format(num_consumers))
consumers = [
Consumer(tasks, results)
for i in range(num_consumers)
]
for w in consumers:
w.start()
# Enqueue jobs
num_jobs = 10
for i in range(num_jobs):
tasks.put(Task(i, i))
# Add a poison pill for each consumer
for i in range(num_consumers):
tasks.put(None)
# Wait for all of the tasks to finish
tasks.join()
# Start printing results
while num_jobs:
result = results.get()
print('Result:', result)
num_jobs -= 1
Explanation: A more complex example shows how to manage several workers consuming data from a JoinableQueue and passing results back to the parent process. The poison pill technique is used to stop the workers. After setting up the real tasks, the main program adds one “stop” value per worker to the job queue. When a worker encounters the special value, it breaks out of its processing loop. The main process uses the task queue’s join() method to wait for all of the tasks to finish before processing the results.
End of explanation
import multiprocessing
import time
def wait_for_event(e):
Wait for the event to be set before doing anything
print('wait_for_event: starting')
e.wait()
print('wait_for_event: e.is_set()->', e.is_set())
def wait_for_event_timeout(e, t):
Wait t seconds and then timeout
print('wait_for_event_timeout: starting')
e.wait(t)
print('wait_for_event_timeout: e.is_set()->', e.is_set())
if __name__ == '__main__':
e = multiprocessing.Event()
w1 = multiprocessing.Process(
name='block',
target=wait_for_event,
args=(e,),
)
w1.start()
w1 = multiprocessing.Process(
name='block',
target=wait_for_event,
args=(e,),
)
w1.start()
w2 = multiprocessing.Process(
name='nonblock',
target=wait_for_event_timeout,
args=(e, 2),
)
w2.start()
print('main: waiting before calling Event.set()')
time.sleep(3)
e.set()
print('main: event is set')
Explanation: Signaling between Processes
The Event class provides a simple way to communicate state information between processes. An event can be toggled between set and unset states. Users of the event object can wait for it to change from unset to set, using an optional timeout value.
End of explanation
import multiprocessing
import sys
def worker_with(lock, stream):
with lock:
stream.write('Lock acquired via with\n')
def worker_no_with(lock, stream):
lock.acquire()
try:
stream.write('Lock acquired directly\n')
finally:
lock.release()
lock = multiprocessing.Lock()
w = multiprocessing.Process(
target=worker_with,
args=(lock, sys.stdout),
)
nw = multiprocessing.Process(
target=worker_no_with,
args=(lock, sys.stdout),
)
w.start()
nw.start()
w.join()
nw.join()
Explanation: When wait() times out it returns without an error. The caller is responsible for checking the state of the event using is_set().
An event.set() call will wake up every process that is waiting for this event.
Controlling Access to Resources
In situations when a single resource needs to be shared between multiple processes, a Lock can be used to avoid conflicting accesses.
End of explanation
import multiprocessing
import time
def stage_1(cond):
perform first stage of work,
then notify stage_2 to continue
name = multiprocessing.current_process().name
print('Starting', name)
with cond:
print('{} done and ready for stage 2'.format(name))
cond.notify_all()
def stage_2(cond):
wait for the condition telling us stage_1 is done
name = multiprocessing.current_process().name
print('Starting', name)
with cond:
cond.wait()
print('{} running'.format(name))
if __name__ == '__main__':
condition = multiprocessing.Condition()
s1 = multiprocessing.Process(name='s1',
target=stage_1,
args=(condition,))
s2_clients = [
multiprocessing.Process(
name='stage_2[{}]'.format(i),
target=stage_2,
args=(condition,),
)
for i in range(1, 3)
]
for c in s2_clients:
c.start()
time.sleep(1)
s1.start()
s1.join()
for c in s2_clients:
c.join()
Explanation: Synchronizing Operations
Condition
Condition objects can be used to synchronize parts of a workflow so that some run in parallel but others run sequentially, even if they are in separate processes.
End of explanation
import random
import multiprocessing
import time
class ActivePool:
def __init__(self):
super(ActivePool, self).__init__()
self.mgr = multiprocessing.Manager()
self.active = self.mgr.list()
self.lock = multiprocessing.Lock()
def makeActive(self, name):
with self.lock:
self.active.append(name)
def makeInactive(self, name):
with self.lock:
self.active.remove(name)
def __str__(self):
with self.lock:
return str(self.active)
def worker(s, pool):
name = multiprocessing.current_process().name
with s:
pool.makeActive(name)
print('Activating {} now running {}'.format(
name, pool))
time.sleep(random.random())
pool.makeInactive(name)
if __name__ == '__main__':
pool = ActivePool()
s = multiprocessing.Semaphore(3)
jobs = [
multiprocessing.Process(
target=worker,
name=str(i),
args=(s, pool),
)
for i in range(10)
]
for j in jobs:
j.start()
while True:
alive = 0
for j in jobs:
if j.is_alive():
alive += 1
j.join(timeout=0.1)
print('Now running {}'.format(pool))
if alive == 0:
# all done
break
Explanation: In this example, two processes run the second stage of a job in parallel, but only after the first stage is done.
Controlling Concurrent Access to Resources
Sometimes it is useful to allow more than one worker access to a resource at a time, while still limiting the overall number. For example, a connection pool might support a fixed number of simultaneous connections, or a network application might support a fixed number of concurrent downloads. A Semaphore is one way to manage those connections.
End of explanation
import multiprocessing
import pprint
def worker(d, key, value):
d[key] = value
if __name__ == '__main__':
mgr = multiprocessing.Manager()
d = mgr.dict()
jobs = [
multiprocessing.Process(
target=worker,
args=(d, i, i * 2),
)
for i in range(10)
]
for j in jobs:
j.start()
for j in jobs:
j.join()
print('Results:', d)
Explanation: Managing Shared State
In the previous example, the list of active processes is maintained centrally in the ActivePool instance via a special type of list object created by a Manager. The Manager is responsible for coordinating shared information state between all of its users.
End of explanation
import multiprocessing
def producer(ns, event):
ns.value = 'This is the value'
event.set()
def consumer(ns, event):
try:
print('Before event: {}'.format(ns.value))
except Exception as err:
print('Before event, error:', str(err))
event.wait()
print('After event:', ns.value)
if __name__ == '__main__':
mgr = multiprocessing.Manager()
namespace = mgr.Namespace()
event = multiprocessing.Event()
p = multiprocessing.Process(
target=producer,
args=(namespace, event),
)
c = multiprocessing.Process(
target=consumer,
args=(namespace, event),
)
c.start()
p.start()
c.join()
p.join()
Explanation: By creating the list through the manager, it is shared and updates are seen in all processes. Dictionaries are also supported.
Shared Namespaces
In addition to dictionaries and lists, a Manager can create a shared Namespace.
End of explanation
import multiprocessing
def producer(ns, event):
# DOES NOT UPDATE GLOBAL VALUE!
ns.my_list.append('This is the value')
event.set()
def consumer(ns, event):
print('Before event:', ns.my_list)
event.wait()
print('After event :', ns.my_list)
if __name__ == '__main__':
mgr = multiprocessing.Manager()
namespace = mgr.Namespace()
namespace.my_list = []
event = multiprocessing.Event()
p = multiprocessing.Process(
target=producer,
args=(namespace, event),
)
c = multiprocessing.Process(
target=consumer,
args=(namespace, event),
)
c.start()
p.start()
c.join()
p.join()
Explanation: Any named value added to the Namespace is visible to all of the clients that receive the Namespace instance.
It is important to know that updates to the contents of mutable values in the namespace are not propagated automatically.
End of explanation
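# A sketch of the usual workaround (added, not from the original): rebind the whole
# attribute instead of mutating it in place, so the Manager propagates the new value.
def producer_rebinding(ns, event):
    ns.my_list = ns.my_list + ['This is the value']  # rebinding triggers an update
    event.set()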
import multiprocessing
def do_calculation(data):
return data * 2
def start_process():
print('Starting', multiprocessing.current_process().name)
if __name__ == '__main__':
inputs = list(range(10))
print('Input :', inputs)
builtin_outputs = map(do_calculation, inputs)
print('Built-in:', [i for i in builtin_outputs])
pool_size = multiprocessing.cpu_count() * 2
pool = multiprocessing.Pool(
processes=pool_size,
initializer=start_process,
)
pool_outputs = pool.map(do_calculation, inputs)
pool.close() # no more tasks
pool.join() # wrap up current tasks
print('Pool :', pool_outputs)
Explanation: Process Pools
The Pool class can be used to manage a fixed number of workers for simple cases where the work to be done can be broken up and distributed between workers independently. The return values from the jobs are collected and returned as a list. The pool arguments include the number of processes and a function to run when starting the task process (invoked once per child).
End of explanation
import multiprocessing
def do_calculation(data):
return data * 2
def start_process():
print('Starting', multiprocessing.current_process().name)
if __name__ == '__main__':
inputs = list(range(10))
print('Input :', inputs)
builtin_outputs = map(do_calculation, inputs)
print('Built-in:', builtin_outputs)
pool_size = multiprocessing.cpu_count() * 2
pool = multiprocessing.Pool(
processes=pool_size,
initializer=start_process,
maxtasksperchild=2,
)
pool_outputs = pool.map(do_calculation, inputs)
pool.close() # no more tasks
pool.join() # wrap up current tasks
print('Pool :', pool_outputs)
Explanation: By default, Pool creates a fixed number of worker processes and passes jobs to them until there are no more jobs. Setting the maxtasksperchild parameter tells the pool to restart a worker process after it has finished a few tasks, preventing long-running workers from consuming ever more system resources.
End of explanation |
13,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy - multidimensional data arrays
Ondrej Lexa 2016
Introduction
The numpy package (module) is used in almost all numerical computation using Python. It is a package that provide high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran so when calculations are vectorized (formulated with vectors and matrices), performance is very good.
NumPy adds basic MATLAB-like capability to Python
Step1: Writing scripts
When writing scripts it is recommended that you
Step2: Some different ways of working with NumPy are
Step3: NumPy arrays
In the numpy package the terminology used for vectors, matrices and higher-dimensional data sets is array. There are a number of ways to initialize new numpy arrays, for example from
a Python list or tuples
using functions that are dedicated to generating numpy arrays, such as arange, linspace, etc.
reading data from files
From lists
For example, to create new vector and matrix arrays from Python lists we can use the numpy.array function.
Step4: The v and M objects are both of the type ndarray that the numpy module provides.
Step5: The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property.
Step6: The number of elements in the array is available through the ndarray.size property
Step7: Equivalently, we could use the function numpy.shape and numpy.size
Step8: The number of dimensions of the array is available through the ndarray.ndim property
Step9: So far the numpy.ndarray looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type?
There are several reasons
Step10: We get an error if we try to assign a value of the wrong type to an element in a numpy array
Step11: Using array-generating functions
For larger arrays it is impractical to initialize the data manually, using explicit Python lists. Instead we can use one of the many functions in numpy that generate arrays of different forms. Some of the more common are
Step12: linspace and logspace
Step13: mgrid
Step14: diag
Step15: zeros and ones
Step16: zeros_like and ones_like
Step17: Manipulating arrays
Indexing
We can index elements in an array using square brackets and indices
Step18: If we omit an index of a multidimensional array it returns the whole row (or, in general, a N-1 dimensional array)
Step19: The same thing can be achieved with using
Step20: We can assign new values to elements in an array using indexing
Step21: Index slicing
Index slicing is the technical name for the syntax M[lower
Step22: Array slices are mutable
Step23: We can omit any of the three parameters in M[lower
Step24: Negative indices counts from the end of the array (positive index from the begining)
Step25: Index slicing works exactly the same way for multidimensional arrays
Step26: Fancy indexing
Fancy indexing is the name for when an array or list is used in-place of an index
Step27: We can also use index masks
Step28: This feature is very useful to conditionally select elements from an array, using for example comparison operators
Step29: Linear algebra
Vectorizing code is the key to writing efficient numerical calculation with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.
Scalar-array operations
We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.
Step30: Above we have used the .T to transpose the matrix object v. We could also have used the transpose function to accomplish the same thing.
Element-wise array-array operations
When we add, subtract, multiply and divide arrays with each other, the default behaviour is element-wise operations
Step31: If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row
Step32: Matrix algebra
What about matrix mutiplication? There are two ways. We can either use the dot function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments
Step33: Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra.
Step34: If we try to add, subtract or multiply objects with incompatible shapes we get an error
Step35: See also the related functions
Step36: Inverse
Step37: Determinant
Step38: Data processing
Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays.
Step39: mean
Step40: standard deviations and variance
Step41: min and max
Step42: sum, prod, and trace
Step43: Calculations with higher-dimensional data
When functions such as min, max, etc. are applied to a multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis argument we can specify how these functions should behave
Step44: Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.
Reshaping, resizing and stacking arrays
The shape of an Numpy array can be modified without copying the underlaying data, which makes it a fast operation even for large arrays.
Step45: We can also use the function flatten to make a higher-dimensional array into a vector. But this function create a copy of the data.
Step46: Stacking and repeating arrays
Using function repeat, tile, vstack, hstack, and concatenate we can create larger vectors and matrices from smaller ones
Step47: concatenate
Step48: hstack and vstack
Step49: Linear equations
System of linear equations like
Step50: or
Step51: Copy and "deep copy"
To achieve high performance, assignments in Python usually do not copy the underlaying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term
Step52: If we want to avoid this behavior, so that when we get a new completely independent object B copied from A, then we need to do a so-called "deep copy" using the function copy
Step53: Iterating over array elements
Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in a interpreted language like Python (or MATLAB), iterations are really slow compared to vectorized operations.
However, sometimes iterations are unavoidable. For such cases, the Python for loop is the most convenient way to iterate over an array
Step54: When we need to iterate over each element of an array and modify its elements, it is convenient to use the enumerate function to obtain both the element and its index in the for loop
Step55: Using arrays in conditions
When using arrays in conditions,for example if statements and other boolean expressions, one needs to use any or all, which requires that any or all elements in the array evalutes to True | Python Code:
from pylab import *
Explanation: NumPy - multidimensional data arrays
Ondrej Lexa 2016
Introduction
The numpy package (module) is used in almost all numerical computation using Python. It is a package that provides high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran so when calculations are vectorized (formulated with vectors and matrices), performance is very good.
NumPy adds basic MATLAB-like capability to Python:
multidimensional arrays with homogeneous data types
specific numeric data types (e.g. int8, uint32, float64)
array manipulation functions (e.g. reshape, transpose, concatenate)
array generation (e.g. ones, zeros, eye, random)
element-wise math operations (e.g. add, multiply, max, sin)
matrix math operations (e.g. inner/outer product, rank, trace)
linear algebra (e.g. inv, pinv, svd, eig, det, qr)
SciPy builds on NumPy (much like MATLAB toolboxes) adding:
multidimensional image processing
non-linear solvers, optimization, root finding
signal processing, fast Fourier transforms
numerical integration, interpolation, statistical functions
sparse matrices, sparse solvers
clustering algorithms, distance metrics, spatial data structures
file IO (including to MATLAB .mat files)
Matplotlib adds MATLAB-like plotting capability on top of NumPy.
Interactive Scientific Python (aka PyLab)
PyLab is a meta-package that imports most of NumPy, SciPy and Matplotlib into the global namespace. It is the easiest (and most MATLAB-like) way to work with scientific Python.
End of explanation
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.pyplot as plt
Explanation: Writing scripts
When writing scripts it is recommended that you:
only import what you need, for efficiency
import packages into namespaces, to avoid name clashes
The community has adopted abbreviated naming conventions:
End of explanation
from numpy import eye, array # Import only what you need
from numpy.linalg import svd
Explanation: Some different ways of working with NumPy are:
End of explanation
# a vector: the argument to the array function is a Python list
v = array([1,2,3,4])
v
# a matrix: the argument to the array function is a nested Python list
M = array([[1, 2], [3, 4]])
M
Explanation: NumPy arrays
In the numpy package the terminology used for vectors, matrices and higher-dimensional data sets is array. There are a number of ways to initialize new numpy arrays, for example from
a Python list or tuples
using functions that are dedicated to generating numpy arrays, such as arange, linspace, etc.
reading data from files
From lists
For example, to create new vector and matrix arrays from Python lists we can use the numpy.array function.
End of explanation
type(v), type(M)
Explanation: The v and M objects are both of the type ndarray that the numpy module provides.
End of explanation
v.shape
M.shape
Explanation: The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property.
End of explanation
M.size
Explanation: The number of elements in the array is available through the ndarray.size property:
End of explanation
shape(M)
size(M)
Explanation: Equivalently, we could use the function numpy.shape and numpy.size
End of explanation
v.ndim
M.ndim
Explanation: The number of dimensions of the array is available through the ndarray.ndim property:
End of explanation
M.dtype
Explanation: So far the numpy.ndarray looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type?
There are several reasons:
Python lists are very general. They can contain any kind of object. They are dynamically typed. They do not support mathematical functions such as matrix and dot multiplications, etc. Implementing such functions for Python lists would not be very efficient because of the dynamic typing.
Numpy arrays are statically typed and homogeneous. The type of the elements is determined when the array is created.
Numpy arrays are memory efficient.
Because of the static typing, fast implementation of mathematical functions such as multiplication and addition of numpy arrays can be implemented in a compiled language (C and Fortran is used).
Using the dtype (data type) property of an ndarray, we can see what type the data of an array has:
End of explanation
M[0,0] = "hello"
Explanation: We get an error if we try to assign a value of the wrong type to an element in a numpy array:
End of explanation
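# An added aside: the element type can also be chosen explicitly when the array is
# created, using the dtype argument (C is a new name used only for this illustration).
C = array([[1, 2], [3, 4]], dtype=complex)
C.dtype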
# create a range
x = arange(0, 10, 1) # arguments: start, stop, step
x
x = arange(-1, 1, 0.1)
x
Explanation: Using array-generating functions
For larger arrays it is impractical to initialize the data manually, using explicit Python lists. Instead we can use one of the many functions in numpy that generate arrays of different forms. Some of the more common are:
arange
End of explanation
# using linspace, both end points ARE included
linspace(0, 10, 25)
logspace(0, 10, 10, base=e)
Explanation: linspace and logspace
End of explanation
x, y = mgrid[0:5, 0:5] # similar to meshgrid in MATLAB
x
y
Explanation: mgrid
End of explanation
# a diagonal matrix
diag([1,2,3])
# diagonal with offset from the main diagonal
diag([1,2,3], k=1)
Explanation: diag
End of explanation
zeros((3,3))
ones((3,3))
Explanation: zeros and ones
End of explanation
zeros_like(x)
ones_like(x)
Explanation: zeros_like and ones_like
End of explanation
# v is a vector, and has only one dimension, taking one index
v[0]
# M is a matrix, or a 2 dimensional array, taking two indices
M[1,1]
Explanation: Manipulating arrays
Indexing
We can index elements in an array using square brackets and indices:
End of explanation
M
M[1]
Explanation: If we omit an index of a multidimensional array it returns the whole row (or, in general, a N-1 dimensional array)
End of explanation
M[1,:] # row 1
M[:,1] # column 1
Explanation: The same thing can be achieved with using : instead of an index:
End of explanation
M[0,0] = -1
M
# also works for rows and columns
M[0,:] = 0
M[:,1] = -1
M
Explanation: We can assign new values to elements in an array using indexing:
End of explanation
A = array([1,2,3,4,5])
A
A[1:3]
Explanation: Index slicing
Index slicing is the technical name for the syntax M[lower:upper:step] to extract part of an array:
End of explanation
A[1:3] = [-2,-3]
A
Explanation: Array slices are mutable: if they are assigned a new value the original array from which the slice was extracted is modified:
End of explanation
A[::] # lower, upper, step all take the default values
A[::2] # step is 2, lower and upper defaults to the beginning and end of the array
A[:3] # first three elements
A[3:] # elements from index 3
Explanation: We can omit any of the three parameters in M[lower:upper:step]:
End of explanation
A = array([1,2,3,4,5])
A[-1] # the last element in the array
A[-3:] # the last three elements
Explanation: Negative indices count from the end of the array (positive index from the beginning):
End of explanation
A = array([[n+m*10 for n in range(5)] for m in range(5)])
A
# a block from the original array
A[1:4, 1:4]
# strides
A[::2, ::2]
Explanation: Index slicing works exactly the same way for multidimensional arrays:
End of explanation
row_indices = [1, 2, 3]
A[row_indices]
col_indices = [1, 2, -1] # remember, index -1 means the last element
A[row_indices, col_indices]
Explanation: Fancy indexing
Fancy indexing is the name for when an array or list is used in place of an index:
End of explanation
B = array([n for n in range(5)])
B
row_mask = array([True, False, True, False, False])
B[row_mask]
# same thing
row_mask = array([1,0,1,0,0], dtype=bool)
B[row_mask]
Explanation: We can also use index masks: If the index mask is a NumPy array of data type bool, then an element is selected (True) or not (False) depending on the value of the index mask at the position of each element:
End of explanation
x = arange(0, 10, 0.5)
x
mask = (5 < x) * (x <= 7)
mask
x[mask]
Explanation: This feature is very useful to conditionally select elements from an array, using for example comparison operators:
End of explanation
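# A related helper (added for illustration): where() turns a boolean mask into
# position indices, which can also be used to index the array.
indices = where(mask)
x[indices]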
v1 = arange(0, 5)
v1 * 2
v1 + 2
A * 2
A + 2
A + A.T
Explanation: Linear algebra
Vectorizing code is the key to writing efficient numerical calculation with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.
Scalar-array operations
We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.
End of explanation
A * A # element-wise multiplication
v1 * v1
Explanation: Above we have used the .T attribute to transpose the array A. We could also have used the transpose function to accomplish the same thing.
Element-wise array-array operations
When we add, subtract, multiply and divide arrays with each other, the default behaviour is element-wise operations:
End of explanation
A.shape, v1.shape
A * v1
Explanation: If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:
End of explanation
dot(A, A)
dot(A, v1)
dot(v1, v1)
Explanation: Matrix algebra
What about matrix multiplication? There are two ways. We can either use the dot function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:
End of explanation
M = matrix(A)
v = matrix(v1).T # make it a column vector
v
M * M
M * v
# inner product
v.T * v
# with matrix objects, standard matrix algebra applies
v + M*v
Explanation: Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra.
End of explanation
v = matrix([1,2,3,4,5,6]).T
shape(M), shape(v)
M * v
Explanation: If we try to add, subtract or multiply objects with incompatible shapes we get an error:
End of explanation
M = array([[1, 2], [3, 4]])
Explanation: See also the related functions: inner, outer, cross, kron, tensordot. Try for example help(kron).
Matrix computations
End of explanation
inv(M) # equivalent to M.I
dot(inv(M), M)
Explanation: Inverse
End of explanation
det(M)
det(inv(M))
Explanation: Determinant
End of explanation
d = arange(0, 10)
d
Explanation: Data processing
Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays.
End of explanation
mean(d)
Explanation: mean
End of explanation
std(d), var(d)
Explanation: standard deviations and variance
End of explanation
d.min()
d.max()
Explanation: min and max
End of explanation
# sum up all elements
sum(d)
# product of all elements
prod(d+1)
# cumulative sum
cumsum(d)
# cumulative product
cumprod(d+1)
A
# same as: diag(A).sum()
trace(A)
Explanation: sum, prod, and trace
End of explanation
M = rand(3,4)
M
# global max
M.max()
# max in each column
M.max(axis=0)
# max in each row
M.max(axis=1)
Explanation: Calculations with higher-dimensional data
When functions such as min, max, etc. are applied to a multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis argument we can specify how these functions should behave:
End of explanation
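# Many other reductions accept the same axis argument; a small added example:
M.sum(axis=0)
M.mean(axis=1)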
M
n, m = M.shape
n, m
N = M.reshape((6, 2))
N
O = M.reshape((1, 12))
O
N[0:2,:] = 1 # modify the array
N
M # and the original variable is also changed. B is only a different view of the same data
O
Explanation: Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.
Reshaping, resizing and stacking arrays
The shape of a NumPy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays.
End of explanation
F = M.flatten()
F
F[0:5] = 0
F
M # now M has not changed, because F's data is a copy of M's, not refering to the same data
Explanation: We can also use the function flatten to make a higher-dimensional array into a vector. But this function creates a copy of the data.
End of explanation
a = array([[1, 2], [3, 4]])
# repeat each element 3 times
repeat(a, 3)
# tile the matrix 3 times
tile(a, 3)
Explanation: Stacking and repeating arrays
Using function repeat, tile, vstack, hstack, and concatenate we can create larger vectors and matrices from smaller ones:
tile and repeat
End of explanation
b = array([[5, 6]])
concatenate((a, b), axis=0)
concatenate((a, b.T), axis=1)
Explanation: concatenate
End of explanation
vstack((a,b))
hstack((a,b.T))
Explanation: hstack and vstack
End of explanation
A = array([[1, 2], [3, 4]])
b = array([5,7])
solve(A,b)
Explanation: Linear equations
System of linear equations like:
$$\begin{array}{rcl}
x + 2y & = & 5\\
3x + 4y & = & 7
\end{array}$$
or
$$\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array} \right]
\left[ \begin{array}{c} x \\ y \end{array} \right] =
\left[ \begin{array}{c} 5 \\ 7 \end{array} \right]$$
could be written in matrix form as $\mathbf {Ax} = \mathbf b$ and could be solved using numpy solve:
End of explanation
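# Added sanity check: the solution returned by solve should satisfy A x = b.
xsol = solve(A, b)
allclose(dot(A, xsol), b)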
dot(inv(A),b)
Explanation: or
End of explanation
# now B is referring to the same array data as A
B = A
# changing B affects A
B[0,0] = 10
B
A
Explanation: Copy and "deep copy"
To achieve high performance, assignments in Python usually do not copy the underlying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference).
End of explanation
B = copy(A)
# now, if we modify B, A is not affected
B[0,0] = -5
B
A
Explanation: If we want to avoid this behavior, so that when we get a new completely independent object B copied from A, then we need to do a so-called "deep copy" using the function copy:
End of explanation
v = array([1,2,3,4])
for element in v:
print(element)
M = array([[1,2], [3,4]])
for row in M:
print('Row', row)
for element in row:
print('Element', element)
Explanation: Iterating over array elements
Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in an interpreted language like Python (or MATLAB), iterations are really slow compared to vectorized operations.
However, sometimes iterations are unavoidable. For such cases, the Python for loop is the most convenient way to iterate over an array:
End of explanation
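# A rough added illustration of why vectorized operations are preferred:
big = arange(1000000)
%timeit big.sum()              # vectorized sum
def python_sum(a):
    total = 0
    for value in a:
        total += value
    return total
%timeit python_sum(big)        # explicit Python loop, far slower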
for row_idx, row in enumerate(M):
print("row_idx", row_idx, "row", row)
for col_idx, element in enumerate(row):
print("col_idx", col_idx, "element", element)
# update the matrix M: square each element
M[row_idx, col_idx] = element ** 2
# each element in M is now squared
M
Explanation: When we need to iterate over each element of an array and modify its elements, it is convenient to use the enumerate function to obtain both the element and its index in the for loop:
End of explanation
M
if (M > 5).any():
print("at least one element in M is larger than 5")
else:
print("no element in M is larger than 5")
if (M > 5).all():
print("all elements in M are larger than 5")
else:
print("all elements in M are not larger than 5")
from IPython.core.display import HTML
def css_styling():
styles = open("./css/sg2.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Using arrays in conditions
When using arrays in conditions, for example if statements and other boolean expressions, one needs to use any or all, which requires that any or all elements in the array evaluate to True:
End of explanation |
13,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Iris Flower Data
Step2: Standardize Features
Step3: Train Support Vector Classifier
Step4: Create Previously Unseen Observation
Step5: Predict Class Of Observation | Python Code:
# Load libraries
from sklearn.svm import LinearSVC
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
import numpy as np
Explanation: Title: Support Vector Classifier
Slug: support_vector_classifier
Summary: How to train a support vector classifier in Scikit-Learn
Date: 2017-09-22 12:00
Category: Machine Learning
Tags: Support Vector Machines
Authors: Chris Albon
<a alt="Support Vector Classifier" href="https://machinelearningflashcards.com">
<img src="support_vector_classifier/Support_Vector_Classifier_print.png" class="flashcard center-block">
</a>
There is a balance between SVC maximizing the margin of the hyperplane and minimizing the misclassification. In SVC, the latter is controlled with the hyperparameter $C$, the penalty imposed on errors. C is a parameter of the SVC learner and is the penalty for misclassifying a data point. When C is small, the classifier is okay with misclassified data points (high bias but low variance). When C is large, the classifier is heavily penalized for misclassified data and therefore bends over backwards to avoid any misclassified data points (low bias but high variance).
In scikit-learn, $C$ is determined by the parameter C and defaults to C=1.0. We should treat $C$ as a hyperparameter of our learning algorithm which we tune using model selection techniques.
Preliminaries
End of explanation
# Load feature and target data
iris = datasets.load_iris()
X = iris.data
y = iris.target
Explanation: Load Iris Flower Data
End of explanation
# Standardize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
Explanation: Standardize Features
End of explanation
# Create support vector classifier
svc = LinearSVC(C=1.0)
# Train model
model = svc.fit(X_std, y)
Explanation: Train Support Vector Classifier
End of explanation
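# A rough added comparison (not part of the original recipe): training accuracy
# for a small, default, and large penalty C on the standardized iris data.
for C_value in [0.01, 1.0, 100.0]:
    model_c = LinearSVC(C=C_value).fit(X_std, y)
    print(C_value, model_c.score(X_std, y))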
# Create new observation
new_observation = [[-0.7, 1.1, -1.1 , -1.7]]
Explanation: Create Previously Unseen Observation
End of explanation
# Predict class of new observation
svc.predict(new_observation)
Explanation: Predict Class Of Observation
End of explanation |
13,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
x = np.linspace(-1.0,1.0,size)
    if sigma == 0:
        noise = np.zeros(size)
    else:
        noise = np.random.normal(0.0, sigma, size)
    y = m*x + b + noise
return x, y
random_line(0.0,0.0,1.0,500)
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
def ticks_out(ax):
Move the ticks to the outside of the box.
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
Plot a random line with slope m, intercept b and size points.
x, y = random_line(m, b, sigma, size)
plt.scatter(x,y,color=color)
plt.xlim(min(x),max(x))
plt.ylim(min(y),max(y))
plt.tick_params(direction='out', width=1, which='both')
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_random_line, m=(-10.0,10.0,0.1), b=(-5.0,5.0,0.1), sigma=(0.0,5.0,0.01), size=(10,100,10), color=['red','green','blue'])
#### assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
13,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Linear Regression
In this module we will learn how to use data to learn a trend and use this trend to predict new observations. First we load the base libraries.
Step1: The easiest way to learn how regression works is by thinking about an example. Consider an imaginary dataset of buildings built in Denver containing three pieces of information for each building
Step2: But we can learn about the math later. Let's think about other interesting questions. Which would be better for predicting
Step3: Know Your Data
Now we inspect the DataFrame using some pandas functions you have already learned such as the shape, head, dtypes, corr, and skew functions. Find more methods associated with DataFrame objects!
Step4: Remember we can access the five-number summary (and a bit more) using the describe function.
Step5: Regression Model
We fit a linear regression model below. We try to use height to predict the number of stories in a building.
Step6: We show the data and the regression lines.
Step7: Check residuals for normality.
Now we will do multiple linear regression. This means we will use more than one predictor when we fit a model and predict our response variable # of stories. We will use both height and the year it was built. We can look at the mean squared error for both models and see which one predicts better. | Python Code:
import csv
import numpy as np
import scipy as sp
import pandas as pd
import sklearn as sk
import matplotlib.pyplot as plt
from IPython.display import Image
print('csv: {}'.format(csv.__version__))
print('numpy: {}'.format(np.__version__))
print('scipy: {}'.format(sp.__version__))
print('pandas: {}'.format(pd.__version__))
print('sklearn: {}'.format(sk.__version__))
Explanation: Simple Linear Regression
In this module we will learn how to use data to learn a trend and use this trend to predict new observations. First we load the base libraries.
End of explanation
Image(url='http://www.radford.edu/~rsheehy/Gen_flash/Tutorials/Linear_Regression/reg-tut_files/linreg3.gif')
Explanation: The easiest way to learn how regression works is by thinking about an example. Consider an imaginary dataset of buildings built in Denver containing three pieces of information for each building: the year it was built, the number of stories, and the building's total height in feet.
It might seem obvious that the more stories a building has, the taller it is in feet, and vice versa. Linear regression exploits this idea. Let's say I'm a professor researching buildings and stories, and I want to use the # of stories in a building to estimate its height in feet. I can easily stand outside a building and see how many stories it has, but my tape measurer won't reach many of the roofs in Denver. I do know that the two-story building I live in is right around 20 feet high. My idea is to take the number of stories, and multiply by 10.something, but I'm not sure this will work for other buildings (commercial and industrial buildings for example).
I lament to my friends, and by a stroke of incredible luck one of my pals happens to have an old dataset lying around that contains the information I need! His parchment has records of 60 random buildings in Denver built from 1907 to 1992. Inspecting the first few entries of the parchment:
(O) ------------)
....| 770 : 54 |
....| 677 : 47 |
....| 428 : 28 |
(O) ------------)
It seems I may need to multiply by more than 10. Taking the first observations and dividing the height by the number of stories for the first three entries gives about 14.3, 14.4, and 15.3 feet per story, respectively. How can I combine all 60 observations to get a good answer? One could naively just take the average of all of these numbers, but in higher dimensions this doesn't work. To help, we have a statistical technique called linear regression. I can use regression to find a good number to multiply the number of stories by (call it $\beta$), and I hope this will help me get an accurate prediction for the height. I know this height will not be exactly right, so there is some error in each prediction. If I write this all out, we have
$$ \text{(height)} = \text{(\# of stories)} \cdot \beta + \epsilon$$
$$ y = X \beta + \epsilon $$
From algebra, we know this is a linear equation, where $\beta$ is the slope of the line. Linear regression actually seeks to minimize the errors $\epsilon$ (the mean squared error). The plot in the link shows the linear regression line, the data it was estimated from, and the errors or deviations $\epsilon$ for each data point.
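For reference (an added note, standard least-squares algebra rather than anything stated above), the coefficient that minimizes the mean squared error has the closed form
$$ \hat{\beta} = (X^{\top} X)^{-1} X^{\top} y $$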
End of explanation
filename = '/Users/jessicagronski/Downloads/bldgstories1.csv'
raw_data = open(filename, 'rt')
reader = csv.reader(raw_data, delimiter=',', quoting=csv.QUOTE_NONE)
x = list(reader)
data = np.array(x).astype('float')
# Load CSV with numpy
import numpy
raw_data = open(filename, 'rb')
data = numpy.loadtxt(raw_data, delimiter=",")
# Load CSV using Pandas
import pandas
colnames = ['year', 'height', 'stories']
data = pandas.read_csv(filename, names=colnames)
data = pandas.DataFrame(data, columns=colnames)
Explanation: But we can learn about the math later. Let's think about other interesting questions. Which would be better for predicting: would # of stories help predict height in feet better than height would predict # of stories?
Say we decide to predict height using the # of stories. Since we are using one piece of information to predict another, this is called simple linear regression.
Would incorporating the year the building was built help me make a better prediction? This would be an example of multiple regression since we would use two pieces of (or more) information to predict.
Okay, now it's time to go back to Python. We will import the data file, get an initial look at the data using pandas functions, and then fit some linear regression models using scikit-learn.
The dataset is in a .csv file, which we need to import. You may have already seen this, but we can use the python standard library function csv.reader, numpy.loadtxt, or pandas.read_csv to import the data. We show all three just as a reminder, but we keep the data as a pandas DataFrame object.
End of explanation
print('Dimensions:')
print(data.shape)
print('First six observations:')
print(data.head(6))
print('Correlation matrix:')
correlations = data.corr(method='pearson')
print(correlations)
Explanation: Know Your Data
Now we inspect the DataFrame using some pandas functions you have already learned such as the shape, head, dtypes, corr, and skew functions. Find more methods associated with DataFrame objects!
End of explanation
pandas.set_option('precision', 3)
description = data.describe()
print(description)
Explanation: Remember we can access the five-number summary (and a bit more) using the describe function.
End of explanation
from sklearn import linear_model
obj = linear_model.LinearRegression()
obj.fit(np.array(data.height.values.reshape(-1,1)), data.stories )#need this values.reshape(-1,1) to avoid deprecation warnings
print( obj.coef_, obj.intercept_ )
Explanation: Regression Model
We fit a linear regression model below. We try to use height to predict the number of stories in a building.
End of explanation
x_min, x_max = data.height.values.min() - .5, data.height.values.max() + .5 # for plotting
x_rng = np.linspace(x_min,x_max,200)
plt.plot(x_rng, x_rng * obj.coef_ + obj.intercept_, 'k')
plt.plot(data.height.values, data.stories.values,'ro', alpha = 0.5)
plt.show()
Explanation: We show the data and the regression lines.
End of explanation
obj2 = linear_model.LinearRegression()
X = np.array( (data.height.values, data.year.values))
obj2.fit(X.transpose() , data.stories)
print(obj2.coef_, obj2.intercept_)
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection = '3d')
#ax.plot(data.height.values, data.year.values , data.stories.values, 'bo')
ax.plot_surface(data.height.values, data.year.values, (np.dot(X.transpose(),obj2.coef_) \
+ obj2.intercept_), color='b')
ax.show()
#plt.close()
##### doesn't work - have the students try to solve it.
print(np.dot(X.transpose(),obj2.coef_).shape)
data.height.values.shape
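# A rough added sketch of the mean squared error comparison described in this step
# (assumes sklearn.metrics is available), using predictions from both fitted models.
from sklearn.metrics import mean_squared_error
pred_height_only = obj.predict(data.height.values.reshape(-1, 1))
pred_height_year = obj2.predict(X.transpose())
print(mean_squared_error(data.stories, pred_height_only),
      mean_squared_error(data.stories, pred_height_year))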
Explanation: Check residuals for normality.
Now we will do multiple linear regression. This means we will use more than one predictor when we fit a model and predict our response variable # of stories. We will use both height and the year it was built. We can look at the mean squared error for both models and see which one predicts better.
End of explanation |
13,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flocs Data Demo
How to Export Data
Both static and collected data can be exported with the command make export-data-to-csv.
This command creates CSV tables for all models registered in flocs/management/commands/export_data_to_csv.py
into directory exported-data/<datestamp>/. If there is a need to change what data for a model are exported, modify its export_class attribute (namedtuple) and to_export_tuple method.
Tables
There are the following CSV tables
Step1: Concepts
Step2: Blocks
Step3: Instructions
Step4: Tasks
Step5: Students
Step6: Task Instances
Step7: Attempts
Step8: Analysis Example
Problem | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
Explanation: Flocs Data Demo
How to Export Data
Both static and collected data can be exported with the command make export-data-to-csv.
This command creates CSV tables for all models registered in flocs/management/commands/export_data_to_csv.py
into directory exported-data/<datestamp>/. If there is a need to change what data for a model are exported, modify its export_class attribute (namedtuple) and to_export_tuple method.
Tables
There are the following CSV tables:
- static data: concepts.csv, blocks.csv, instructions.csv, tasks.csv
- collected data: students.csv, task-instances.csv, attempts.csv
End of explanation
concepts = pd.read_csv('data/concepts.csv')
concepts.head()
Explanation: Concepts
End of explanation
blocks = pd.read_csv('data/blocks.csv')
blocks.head()
Explanation: Blocks
End of explanation
instructions = pd.read_csv('data/instructions.csv')
instructions.head()
Explanation: Instructions
End of explanation
tasks = pd.read_csv('data/tasks.csv')
tasks.head(3)
Explanation: Tasks
End of explanation
students = pd.read_csv('data/students.csv')
students.head()
Explanation: Students
End of explanation
task_instances = pd.read_csv('data/task-instances.csv')
task_instances.head()
Explanation: Task Instances
End of explanation
attempts = pd.read_csv('data/attempts.csv')
attempts.head()
Explanation: Attempts
End of explanation
programming_concepts = concepts[concepts.type == 'programming']
programming_concepts
solved_instances = task_instances[task_instances.solved]
instances_concepts = pd.merge(solved_instances, tasks, on='task_id')[['time_spent', 'concepts_ids']]
instances_concepts.head()
# unpack concepts IDs
from ast import literal_eval
concepts_lists = [literal_eval(c) for c in instances_concepts.concepts_ids]
times = instances_concepts.time_spent
concepts_times = pd.DataFrame([(times[i], concept_id)
for i, concepts_list in enumerate(concepts_lists)
for concept_id in concepts_list],
columns=['time', 'concept_id'])
concepts_times.head()
# (If you know how to do this better (ideally a function to unpack any column), let me know.)
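# One possible cleaner alternative (added; assumes pandas >= 0.25 for DataFrame.explode):
# concepts_times = (instances_concepts
#                   .assign(concept_id=concepts_lists)
#                   .explode('concept_id')[['time_spent', 'concept_id']]
#                   .rename(columns={'time_spent': 'time'}))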
# filter programming concepts
programming_concepts_times = pd.merge(concepts_times, programming_concepts)
programming_concepts_times.head()
# calculate median for each programming concept
medians = programming_concepts_times.groupby(['concept_id', 'name']).median()
medians
# plot
programming_concepts_times['concept'] = programming_concepts_times['name'].apply(lambda x: x.split('_')[-1].lower())
programming_concepts_times[['concept', 'time']].boxplot(by='concept')
Explanation: Analysis Example
Problem: Find the median task-solving time for each programming concept.
End of explanation |
13,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Projects
Wells are one of the fundamental objects in welly.
Well objects include collections of Curve objects. Multiple Well objects can be stored in a Project.
On this page, we take a closer look at the Project class. It lets us handle groups of wells. It is really just a list of Well objects, with a few extra powers.
First, some preliminaries…
Step1: Make a project
We have a few LAS files in a folder; we can load them all at once with standard POSIX file globbing syntax
Step2: Now we have a project, containing two files
Step3: You can pass in a list of files or URLs
Step4: This project has three wells
Step5: Typical, the UWIs are a disaster. Let's ignore this for now.
The Project is really just a list-like thing, so you can index into it to get at a single well. Each well is represented by a welly.Well object.
Step6: Some of the fields of this LAS file are messed up; see the Well notebook for more on how to fix this.
Plot curves from several wells
The DT log is called DT4P in one of the wells. We can deal with this sort of issue with aliases. Let's set up an alias dictionary, then plot the DT log from each well
Step7: Get a pandas.DataFrame
The df() method makes a DataFrame using a dual index of UWI and Depth.
Before we export our wells, let's give Kennetcook #2 a better UWI
Step8: That's better.
When creating the DataFrame, you can pass a list of the keys (mnemonics) you want, and use aliases as usual.
Step9: Quality
Welly can run quality tests on the curves in your project. Some of the tests take arguments. You can test for things like this
Step10: Let's add our own test for units
Step11: We'll use the same alias dictionary as before
Step12: Now we can run the tests and look at the results, which are in an HTML table | Python Code:
import welly
welly.__version__
Explanation: Projects
Wells are one of the fundamental objects in welly.
Well objects include collections of Curve objects. Multiple Well objects can be stored in a Project.
On this page, we take a closer look at the Project class. It lets us handle groups of wells. It is really just a list of Well objects, with a few extra powers.
First, some preliminaries…
End of explanation
p = welly.read_las("../../tests/assets/example_*.las")
Explanation: Make a project
We have a few LAS files in a folder; we can load them all at once with standard POSIX file globbing syntax:
End of explanation
p
Explanation: Now we have a project, containing two files:
End of explanation
p = welly.read_las(['../../tests/assets/P-129_out.LAS',
'https://geocomp.s3.amazonaws.com/data/P-130.LAS',
'https://geocomp.s3.amazonaws.com/data/R-39.las',
])
Explanation: You can pass in a list of files or URLs:
End of explanation
p
Explanation: This project has three wells:
End of explanation
p[0]
Explanation: Typical, the UWIs are a disaster. Let's ignore this for now.
The Project is really just a list-like thing, so you can index into it to get at a single well. Each well is represented by a welly.Well object.
End of explanation
alias = {'Sonic': ['DT', 'DT4P'],
'Caliper': ['HCAL', 'CALI'],
}
import matplotlib.pyplot as plt
fig, axs = plt.subplots(figsize=(7, 14),
ncols=len(p),
sharey=True,
)
for i, (ax, w) in enumerate(zip(axs, p)):
log = w.get_curve('Sonic', alias=alias)
if log is not None:
ax = log.plot(ax=ax)
ax.set_title("Sonic log for\n{}".format(w.uwi))
min_z, max_z = p.basis_range
plt.ylim(max_z, min_z)
plt.show()
Explanation: Some of the fields of this LAS file are messed up; see the Well notebook for more on how to fix this.
Plot curves from several wells
The DT log is called DT4P in one of the wells. We can deal with this sort of issue with aliases. Let's set up an alias dictionary, then plot the DT log from each well:
End of explanation
p[0].uwi = p[0].name
p[0]
Explanation: Get a pandas.DataFrame
The df() method makes a DataFrame using a dual index of UWI and Depth.
Before we export our wells, let's give Kennetcook #2 a better UWI:
End of explanation
alias
keys = ['Caliper', 'GR', 'Sonic']
df = p.df(keys=keys, alias=alias, rename_aliased=True)
df
Explanation: That's better.
When creating the DataFrame, you can pass a list of the keys (mnemonics) you want, and use aliases as usual.
End of explanation
import welly.quality as q
tests = {
'All': [q.no_similarities],
'Each': [q.no_gaps, q.no_monotonic, q.no_flat],
'GR': [q.all_positive],
'Sonic': [q.all_positive, q.all_between(50, 200)],
}
Explanation: Quality
Welly can run quality tests on the curves in your project. Some of the tests take arguments. You can test for things like this:
all_positive: Passes if all the values are greater than zero.
all_above(50): Passes if all the values are greater than 50.
mean_below(100): Passes if the mean of the log is less than 100.
no_nans: Passes if there are no NaNs in the log.
no_flat: Passes if there are no sections of well log with the same values (e.g. because a gap was interpolated across with a constant value).
no_monotonic: Passes if there are no monotonic ramps in the log (e.g. because a gap was linearly interpolated across).
Insert lists of tests into a dictionary with any of the following key examples:
'GR': The test(s) will run against the GR log.
'Gamma': The test(s) will run against the log matching according to the alias dictionary.
'Each': The test(s) will run against every log in a well.
'All': Some tests take multiple logs as input, for example quality.no_similarities. These test(s) will run against all the logs as a group. Could be quite slow, because there may be a lot of pairwise comparisons to do.
The tests are run against all wells in the project. If you only want to run against a subset of the wells, make a new project for them.
End of explanation
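To make the argument-taking tests listed above concrete, here is a sketch of a fuller dictionary. It only uses test names described in this notebook; the GR and Caliper thresholds are made-up values for illustration, not recommendations.
tests = {
    'All': [q.no_similarities],
    'Each': [q.no_gaps, q.no_monotonic, q.no_flat, q.no_nans],
    'GR': [q.all_positive, q.mean_below(100)],           # illustrative mean threshold
    'Sonic': [q.all_positive, q.all_between(50, 200)],
    'Caliper': [q.all_above(50)],                         # illustrative minimum, in mm
}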
def has_si_units(curve):
return curve.units.lower() in ['mm', 'gapi', 'us/m', 'k/m3']
tests['Each'].append(has_si_units)
Explanation: Let's add our own test for units:
End of explanation
alias
Explanation: We'll use the same alias dictionary as before:
End of explanation
from IPython.display import HTML
HTML(p.curve_table_html(keys=['Caliper', 'GR', 'Sonic', 'SP', 'RHOB'],
tests=tests, alias=alias)
)
Explanation: Now we can run the tests and look at the results, which are in an HTML table:
End of explanation |
13,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-ll', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MOHC
Source ID: UKESM1-0-LL
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
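For example, following the pattern shown in the cell above (the name and email are placeholders, not real document authors):
DOC.set_author("Jane Doe", "[email protected]")  # placeholder values, replace with the real author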
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
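For the model type above, you would pass one of the listed choices to DOC.set_value(); "NPZD" here is purely illustrative, so use whichever value matches the actual model:
DOC.set_value("NPZD")  # must be one of the Valid Choices listed in the cell above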
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
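A sketch for a 1.N property such as this prognostic variable list, assuming that repeated DOC.set_value() calls accumulate values for N-cardinality properties (the tracer names are generic placeholders, not the model's real tracers):
DOC.set_value("Dissolved inorganic carbon (DIC)")  # placeholder tracer name
DOC.set_value("Alkalinity")                        # placeholder tracer name
DOC.set_value("Dissolved oxygen")                  # placeholder tracer name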
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if it is different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a representation of bacteria?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of the sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
13,461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
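For example, a model using operator splitting would record the following (illustrative only; the split-operator timesteps in 3.2 to 3.5 then describe the individual steps):
DOC.set_value("Operator splitting")  # one of the Valid Choices listed in the cell above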
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is the order of the split operators alternated between timesteps?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
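A BOOLEAN property takes one of the two valid choices shown in the template. A sketch with an illustrative answer:
# Illustrative only -- answer for the model actually being documented.
DOC.set_value(True)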
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
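An INTEGER property is filled with a plain, unquoted number. A sketch with an illustrative value:
# Illustrative value only (e.g. a 192 x 144 longitude-latitude grid).
DOC.set_value(192 * 144)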
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
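For an ENUM property with cardinality 0.N, the usual ES-DOC notebook convention is one valid choice per DOC.set_value call (worth confirming against your pyesdoc version); each string must match the valid choices listed above exactly. An illustrative sketch:
# Illustrative selection only.
DOC.set_value("Vegetation")
DOC.set_value("Anthropogenic")
DOC.set_value("Biomass burning")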
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
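A single-valued ENUM (cardinality 1.1) takes exactly one of the listed choices. An illustrative sketch:
# Illustrative choice only -- must be one of the valid choices shown above.
DOC.set_value("Offline (with clouds)")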
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
13,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neuron Lang is a python based DSL for naming neurons. Neurons are modelled as collections of phenotypes with semantics backed by Web Ontology Language (OWL2) classes. Neuron Lang provides tools for mapping to and from collections of local names for phenotypes by using ontology identifiers as the common language underlying all local naming. These tools also let us automatically generate names for neurons in a regular and consistent way using a set of rules operating on the neurons' constitutent phenotypes. Neuron Lang can export to python or to any serialziation supported by rdflib, however deterministic turtle (ttl) is prefered. Neuron Lang depends on files from the NIF-Ontology.
This notebook has examples of how to use Neuron Lang to
Step1: Neurons
Neuron instances are build out of Phenotype instances.
Phenotypes are object-predicate pairs that take curied
string representations (or uris) as arguments.
Step2: Viewing and saving
Neuron Lang can only be used to add new neurons to a graph.
Therefore if you need to remove neruons you need to reset
the whole program. For this reason I do not suggest using
ipython notebooks since they persist state in ways that can
be very confusing when working with a persistent datastore.
Step3: scig
When creating neurons we want to be able to find relevant identifiers
quickly while working. There is a cli utility called scig that can be
used as a cell magic %scig to search a SciGraph instance for terms.
Step4: Accessing SciGraph directly from python
Step5: Namespaces - context managers
We can be more concise by creating a namespace for our phenotype names.
Normally these are defined in another file (e.g. phenotype_namespaces.py) so that they can be shared and reused.
NOTE
Step6: Namespaces 2 - global modification
Step7: Context - context managers
Step8: Context 2 - global modification
Step9: Context 3 - the old way
Step10: Disjointness
Neuron Lang enforces basic disjointness on phenotypes of 'data' level neurons | Python Code:
from neurondm import *
# set predicates in the event that the default config options do not work
# if you cloned the NIF-Ontology into a nonstandard location change ontology_local_repo in devconfig.yaml
from pyontutils.namespaces import ilxtr as pred
from neurondm import phenotype_namespaces as phns
config = Config('neuron-lang-notebook')
# By default Config saves ontology files in NIF-Ontology/ttl/generated/neurons/
# and python files in pyontutils/neurondm/neurondm/compiled/
# NOTE: if you call config multiple times any call to Neuron
# will be associate with the most recent instance of Config
# you can ignore this cell
# some utility functions needed for this tutorial
# due to the potential for notebooks to run cells out of order
def cellguard(addns=False):
# unfortunately ipy.hooks['pre_run_code_hook'].add(__cellguard)
# causes this to be called too frequently :/
setLocalNames()
setLocalContext()
if addns:
setLocalNames(phns.BBP)
Explanation: Neuron Lang is a python based DSL for naming neurons. Neurons are modelled as collections of phenotypes with semantics backed by Web Ontology Language (OWL2) classes. Neuron Lang provides tools for mapping to and from collections of local names for phenotypes by using ontology identifiers as the common language underlying all local naming. These tools also let us automatically generate names for neurons in a regular and consistent way using a set of rules operating on the neurons' constitutent phenotypes. Neuron Lang can export to python or to any serialziation supported by rdflib, however deterministic turtle (ttl) is prefered. Neuron Lang depends on files from the NIF-Ontology.
This notebook has examples of how to use Neuron Lang to:
* Define neurons and phenotypes.
* Export all defined neurons.
* Use %scig magic to search for existing ontology identifiers
* Use LocalNameManager to create abbreviations for phenotypes.
* Bind local names in the current python namespace using with or setLocalNames.
* Create a phenotype context in which to define neurons using with or setLocalContext.
Please see the documentation in order to set up a working
environment for this notebook.
Setup for any file defining neurons
End of explanation
myFirstNeuron = Neuron(Phenotype('NCBITaxon:10090'),
Phenotype('UBERON:0000955'))
# NOTE: label is cosmetic and will be overwritten by rdfs:label
# unless you set override=True
myPhenotype = Phenotype('NCBITaxon:9685', # object
pred.hasInstanceInSpecies, # predicate (optional)
label='Cat') # label for human readability
# str and repr produce different results
print(myFirstNeuron)
print(repr(myFirstNeuron)) # NOTE: this is equivalent to typing `myFirstNeuron` and running the cell
Explanation: Neurons
Neuron instances are built out of Phenotype instances.
Phenotypes are object-predicate pairs that take curied
string representations (or uris) as arguments.
End of explanation
# view the turtle (ttl) serialization of all neurons
turtle = config.ttl()
print(turtle)
# view the python serialization of all neurons for the current config
python = config.python()
print(python)
# write the turtle file defined in cell 1
config.write()
# write a python file that has the same name as the file in cell 1
# but with python safe separators and a .py extension
config.write_python()
# view a list of all neurons for the current config
config.neurons()
Explanation: Viewing and saving
Neuron Lang can only be used to add new neurons to a graph.
Therefore if you need to remove neurons you need to reset
the whole program. For this reason I do not suggest using
ipython notebooks since they persist state in ways that can
be very confusing when working with a persistent datastore.
End of explanation
import neurondm.lang
%scig --help
# use -t to limit the number of results
%scig -t 1 t hippocampus -v
# you can escape spaces with \
%scig t macaca\ mulatta
# quotes also allow search with spaces
%scig -t 1 s 'nucleus basalis of meynert'
# without quotes scig will search multiple terms at once
%scig -t 1 t cat mouse
Explanation: scig
When creating neurons we want to be able to find relevant identifiers
quickly while working. There is a cli utility called scig that can be
used as a cell magic %scig to search a SciGraph instance for terms.
End of explanation
from pyontutils.scigraph_client import Graph, Vocabulary
sgg = Graph()
sgv = Vocabulary()
terms = sgv.findByTerm('neocortex')
nodes_edges = sgg.getNeighbors('UBERON:0000955',
relationshipType='BFO:0000050', # part of
direction='INCOMING')
print('synonyms:', terms[0]['synonyms'])
print('subjects:', *(e['sub'] for e in nodes_edges['edges']))
Explanation: Accessing SciGraph directly from python
End of explanation
from neurondm import LocalNameManager
from pyontutils.utils import TermColors as tc # pretty printing that is not part of this tutorial
class myPhenotypeNames(LocalNameManager): # see neurons.LocalNameManager
Mouse = Phenotype('NCBITaxon:10090', pred.hasInstanceInSpecies)
Rat = Phenotype('NCBITaxon:10116', pred.hasInstanceInSpecies)
brain = Phenotype('UBERON:0000955', pred.hasSomaLocatedIn)
PV = Phenotype('PR:000013502', pred.hasExpressionPhenotype)
# you can see all the mappings in a local name manager by printing it or repring it
print(myPhenotypeNames)
# with a context manager we can use a namespace to create neurons
# more concisely and more importantly to repr them more concisely
with myPhenotypeNames:
n = Neuron(Rat, brain, PV)
# printing is unaffected so the fully expanded form is always
# accessible (__str__ vs __repr__)
print(tc.red('print inside unchanged:'), n, sep='\n')
print(tc.red('repr inside inside:'), repr(n))
# we can also repr a neuron defined elsewhere using our own names
print(tc.red('repr outside inside:'), repr(myFirstNeuron))
# outside the context manager our concise repr is gone
print(tc.red('repr inside outside:'), repr(n))
# in addition we will now get a NameError if we try to use bare words
try: Neuron(Rat)
except NameError: print(tc.blue('Rat fails as expected.'))
Explanation: Namespaces - context managers
We can be more concise by creating a namespace for our phenotype names.
Normally these are defined in another file (e.g. phenotype_namespaces.py) so that they can be shared and reused.
NOTE: for a full explication of phenotype namespaces see neurondm/example.py
End of explanation
cellguard()
# there are already many namespaces defined in phenotype_namespaces.py
print(tc.red('Namespaces:'), phns.__all__)
# setLocalNames adds any names from a namespace to the current namespace
setLocalNames(phns.Species)
# we can load additional names
setLocalNames(phns.Regions, phns.Layers)
# however we will get a ValueError on a conflict
try:
setLocalNames(phns.Test)
except ValueError as e:
print(tc.red('The error:'), e)
# we can extend namespaces as well (again, best in a separate file)
# as long as the local names match we can combine entries
class MoreSpecies(phns.Species, myPhenotypeNames):
Cat = myPhenotype
ACh = Phenotype('CHEBI:15355', pred.hasExpressionPhenotype)
AChMinus = NegPhenotype(ACh)
with MoreSpecies:
can = Neuron(Cat, ACh, L2)
cant = Neuron(Cat, AChMinus, L3)
print(tc.red('More species:'), can, cant, sep='\n')
# we can also refer to phenotypes in a namespace directly
n = Neuron(Mouse, MoreSpecies.ACh)
print(tc.red('Direct usage:'), n, sep='\n')
# getLocalNames can be used to inspect the current set of defined names
print(tc.red('getLocalNames:'), sorted(getLocalNames().keys()))
# clear the local names by calling setLocalNames with no arguments
setLocalNames()
# no more short names ;_;
try: Neuron(Mouse, PV)
except NameError: print(tc.blue('Neuron(Mouse, PV) fails as expected'))
# for the rest of these examples we will use the BBP namespace
setLocalNames(phns.BBP)
# define neurons using our local names
Neuron(Mouse, L23, CCK, NPY)
Neuron(Mouse, brain, L3, PV)
Neuron(PV, DA)
cellguard()
Explanation: Namespaces 2 - global modification
End of explanation
cellguard(True)
# we often want to create many neurons in the same contex
# the easiest way to do this is to use a instance of a neuron
# as the input to a context manager
with Neuron(Rat, CA1):
n1 = Neuron(CCK)
n2 = Neuron(NPY)
n3 = Neuron(PC)
# neurons always retain the context they were created in
print(tc.red('example 1:'), *map(repr, (n1, n2, n3)), '', sep='\n')
# you cannot change a neuron's context but you can see its original context
print(tc.red('example 2:'), n3.context, '', sep='\n')
try:
n3.context = Neuron(Mouse, CA2)
except TypeError as e:
print(tc.red('error when setting context:'), e, '\n')
# you can also use with as syntax when creating a context
with Neuron(Mouse) as n4:
n5 = Neuron(CCK)
print(tc.red('example 3:'), *map(repr, (n4, n5)), '', sep='\n')
# contexts cannot violate disjointness axioms
try:
with Neuron(Rat):
print(tc.red('neuron ok:'), Neuron(), '', sep='\n')
with Neuron(Mouse):
print('This will not print')
except TypeError: print(tc.blue('Neuron(Rat, Mouse) fails as expected\n'))
# if you define a new neuron inside a context it will carry
# that context with it if used to define a new context
# context does not nest for neurons defined outside a with
with n3:
n6 = Neuron(VIP)
with n5: # defined outside does not nest
n7 = Neuron(SOM)
with Neuron(SLM) as n8: # defined inside nests
n9 = Neuron(SOM)
n10 = Neuron(SOM)
print(tc.red('example 4:'), *map(repr, (n3, n6, n5, n7, n8, n9, n10)), sep='\n')
#
with Neuron(Rat), Neuron(CTX) as context:
print(context)
n11 = Neuron(L1)
print(n11)
cellguard()
Explanation: Context - context managers
End of explanation
cellguard(True)
# like namespaces you can also set a persistent local context
context0 = Neuron(CCK, NPY, SOM, DA, CA1, SPy)
context1 = Neuron(Rat, S1, L4)
setLocalContext(context0)
print(tc.red('created with context:'), repr(Neuron(TPC)))
# contexts are addative
# to change context using a Neuron you need to setLocalContext() first
# without resetting we get a disjointness error
try: setLocalContext(Neuron(Rat, S1, L4))
except TypeError as e: print(tc.blue('Neuron(S1, CA1) fails as expected'), e)
# reset
setLocalContext()
# now we will not get an error
setLocalContext(Neuron(Rat, S1, L4))
print(tc.red('Success:'), repr(Neuron(PC)))
# a neuron declared in a different context can be used to change the context withour resetting
# if you know in advance that you will be dealing with multiple contexts, I suggest you
# create all of those context neurons first so that they are available when needed
setLocalContext(context0)
# like namespaces call getLocalContext to see the current context
print(tc.red('getLocalContext:'), *(p.pShortName for p in getLocalContext()))
# like namespaces calling setLocalContext without arguments clears context
setLocalContext()
print(tc.red('no context:'), repr(Neuron(brain)))
cellguard()
Explanation: Context 2 - global modification
End of explanation
cellguard(True)
context = (Rat, S1)
ca1_context = (Rat, CA1)
def NeuronC(*args, **kwargs):
return Neuron(*args, *context, **kwargs)
def NeuronH(*args, **kwargs):
return Neuron(*args, *ca1_context, **kwargs)
neurons = {
'HBP_CELL:0000013': NeuronC(CCK),
'HBP_CELL:0000016': NeuronC(PV),
'HBP_CELL:0000018': NeuronC(PC),
'HBP_CELL:0000135': NeuronH(SLM, PPA),
'HBP_CELL:0000136': NeuronH(SO, BP),
'HBP_CELL:0000137': NeuronH(SPy, BS),
'HBP_CELL:0000148': Neuron(Rat, STRI, MSN, D1),
'HBP_CELL:0000149': Neuron(Rat, CA3, PC),
}
neurons['HBP_CELL:0000013']
cellguard()
Explanation: Context 3 - the old way
End of explanation
cellguard(True)
try: Neuron(Mouse, Rat)
except TypeError as e: print(tc.blue('Neuron(Mouse, Rat) fails as expected'), e, sep='\n')
cellguard()
Explanation: Disjointness
Neuron Lang enforces basic disjointness on phenotypes of 'data' level neurons
End of explanation |
13,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Moon Phase Correlation Analysis
Step1: This Wikipedia article has a nice description of how to calculate the current phase of the moon. In code, that looks like this
Step2: Let's randomly sample 10% of pings for nightly submissions made from 2015-07-05 to 2015-08-05
Step3: Extract the startup time metrics with their submission date and make sure we only consider one submission per user
Step4: Obtain an array of pairs, each containing the moon visibility and the startup time
Step5: Let's see what this data looks like
Step6: The correlation coefficient is now easy to calculate | Python Code:
import ujson as json
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import plotly.plotly as py
from moztelemetry import get_pings, get_pings_properties, get_one_ping_per_client
from moztelemetry.histogram import Histogram
import datetime as dt
%pylab inline
Explanation: Moon Phase Correlation Analysis
End of explanation
def approximate_moon_visibility(current_date):
days_per_synodic_month = 29.530588853 # change this if the moon gets towed away
days_since_known_new_moon = (current_date - dt.date(2015, 7, 16)).days
phase_fraction = (days_since_known_new_moon % days_per_synodic_month) / days_per_synodic_month
return (1 - phase_fraction if phase_fraction > 0.5 else phase_fraction) * 2
def date_string_to_date(date_string):
return dt.datetime.strptime(date_string, "%Y%m%d").date()
Explanation: This Wikipedia article has a nice description of how to calculate the current phase of the moon. In code, that looks like this:
End of explanation
pings = get_pings(sc, app="Firefox", channel="nightly", submission_date=("20150705", "20150805"), fraction=0.1, schema="v4")
Explanation: Let's randomly sample 10% of pings for nightly submissions made from 2015-07-05 to 2015-08-05:
End of explanation
subset = get_pings_properties(pings, ["clientId", "meta/submissionDate", "payload/simpleMeasurements/firstPaint"])
subset = get_one_ping_per_client(subset)
cached = subset.cache()
Explanation: Extract the startup time metrics with their submission date and make sure we only consider one submission per user:
End of explanation
pairs = cached.map(lambda p: (approximate_moon_visibility(date_string_to_date(p["meta/submissionDate"])), p["payload/simpleMeasurements/firstPaint"]))
pairs = np.asarray(pairs.filter(lambda p: p[1] != None and p[1] < 100000000).collect())
Explanation: Obtain an array of pairs, each containing the moon visibility and the startup time:
End of explanation
plt.figure(figsize=(15, 7))
plt.scatter(pairs.T[0], pairs.T[1])
plt.xlabel("Moon visibility ratio")
plt.ylabel("Startup time (ms)")
plt.show()
Explanation: Let's see what this data looks like:
End of explanation
np.corrcoef(pairs.T)[0, 1]
Explanation: The correlation coefficient is now easy to calculate:
End of explanation |
13,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Using the Meta-Dataset Data Pipeline
This notebook shows how to use meta_dataset’s input pipeline to sample data for the Meta-Dataset benchmark. There are two main ways in which data is sampled
Step2: Primers
Download your data and process it as explained in link. Set BASE_PATH pointing the processed tf-records ($RECORDS in the conversion instructions).
meta_dataset supports many different setting for sampling data. We use gin-config to control default parameters of our functions. You can go to default gin file we are pointing and see the default values.
You can use meta_dataset in eager or graph mode.
Let's write a generator that makes the right calls to return data from dataset. dataset.make_one_shot_iterator() returns an iterator where each element is an episode.
SPLIT is used to define which part of the meta-split is going to be used. Different splits have different classes and the details on how they are created can be found in the paper.
Step3: Reading datasets
In order to sample data, we need to read the dataset_spec files for each dataset. Following snippet reads those files into a list.
Step4: (1) Episodic Mode
meta_dataset uses tf.data.Dataset API and it takes one call to pipeline.make_multisource_episode_pipeline(). We loaded or defined most of the variables used during this call above. The remaining parameters are explained below
Step5: Using Dataset
The episodic dataset consist in a tuple of the form (Episode, data source ID). The data source ID is an integer Tensor containing a value in the range [0, len(all_dataset_specs) - 1]
signifying which of the datasets of the multisource pipeline the given episode
came from. Episodes consist of support and query sets and we want to learn to classify images at the query set correctly given the support images. For both support and query set we have images, labels and class_ids. Labels are transformed class_ids offset to zero, so that global class_ids are set to [0, N] where N is the number of classes in an episode.
As one can see the number of images in query set and support set is different. Images are scaled, copied into 84*84*3 tensors. Labels are presented in two forms
Step6: Visualizing Episodes
Let's visualize the episodes.
Support and query set for each episode plotted sequentially. Set N_EPISODES to control number of episodes visualized.
Each episode is sampled from a single dataset and include N different classes. Each class might have different number of samples in support set, whereas number of images in query set is fixed. We limit number of classes and images per class to 10 in order to create legible plots. Actual episodes might have more classes and samples.
Each column represents a distinct class and dataset specific class ids are plotted on the x_axis.
Step7: (2) Batch Mode
Second mode that meta_dataset library provides is the batch mode, where one can sample batches from the list of datasets in a non-episodic manner and use it to train baseline models. There are couple things to note here
Step8: (3) Fixing Ways and Shots
meta_dataset library provides option to set number of classes/samples per episode. There are 3 main flags you can set.
NUM_WAYS
Step9: (4) Using Meta-dataset with PyTorch
As mentioned above it is super easy to consume meta_dataset as NumPy arrays. This also enables easy integration into other popular deep learning frameworks like PyTorch. TensorFlow code processes the data and passes it to PyTorch, ready to be consumed. Since the data loader and processing steps do not have any operation on the GPU, TF should not attempt to grab the GPU, and it should be available for PyTorch.
1. Let's use an episodic dataset created earlier, dataset_episodic, and build on top of it. We will transpose tensor to CHW, which is the common order used by convolutional layers of PyTorch.
2. We will use zero-indexed labels, therefore grabbing e[1] and e[4]. At the end we return a generator that consumes the tf.Dataset.
3. Using .cuda() on PyTorch tensors should distribute them to appropriate devices. | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Imports and Utility Functions
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from collections import Counter
import gin
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from meta_dataset.data import config
from meta_dataset.data import dataset_spec as dataset_spec_lib
from meta_dataset.data import learning_spec
from meta_dataset.data import pipeline
def plot_episode(support_images, support_class_ids, query_images,
query_class_ids, size_multiplier=1, max_imgs_per_col=10,
max_imgs_per_row=10):
for name, images, class_ids in zip(('Support', 'Query'),
(support_images, query_images),
(support_class_ids, query_class_ids)):
n_samples_per_class = Counter(class_ids)
n_samples_per_class = {k: min(v, max_imgs_per_col)
for k, v in n_samples_per_class.items()}
id_plot_index_map = {k: i for i, k
in enumerate(n_samples_per_class.keys())}
num_classes = min(max_imgs_per_row, len(n_samples_per_class.keys()))
max_n_sample = max(n_samples_per_class.values())
figwidth = max_n_sample
figheight = num_classes
if name == 'Support':
print('#Classes: %d' % len(n_samples_per_class.keys()))
figsize = (figheight * size_multiplier, figwidth * size_multiplier)
fig, axarr = plt.subplots(
figwidth, figheight, figsize=figsize)
fig.suptitle('%s Set' % name, size='20')
fig.tight_layout(pad=3, w_pad=0.1, h_pad=0.1)
reverse_id_map = {v: k for k, v in id_plot_index_map.items()}
for i, ax in enumerate(axarr.flat):
ax.patch.set_alpha(0)
# Print the class ids, this is needed since, we want to set the x axis
# even there is no picture.
ax.set(xlabel=reverse_id_map[i % figheight], xticks=[], yticks=[])
ax.label_outer()
for image, class_id in zip(images, class_ids):
# First decrement by one to find last spot for the class id.
n_samples_per_class[class_id] -= 1
# If class column is filled or not represented: pass.
if (n_samples_per_class[class_id] < 0 or
id_plot_index_map[class_id] >= max_imgs_per_row):
continue
# If width or height is 1, then axarr is a vector.
if axarr.ndim == 1:
ax = axarr[n_samples_per_class[class_id]
if figheight == 1 else id_plot_index_map[class_id]]
else:
ax = axarr[n_samples_per_class[class_id], id_plot_index_map[class_id]]
ax.imshow(image / 2 + 0.5)
plt.show()
def plot_batch(images, labels, size_multiplier=1):
num_examples = len(labels)
figwidth = np.ceil(np.sqrt(num_examples)).astype('int32')
figheight = num_examples // figwidth
figsize = (figwidth * size_multiplier, (figheight + 1.5) * size_multiplier)
_, axarr = plt.subplots(figwidth, figheight, dpi=300, figsize=figsize)
for i, ax in enumerate(axarr.transpose().ravel()):
# Images are between -1 and 1.
ax.imshow(images[i] / 2 + 0.5)
ax.set(xlabel=labels[i], xticks=[], yticks=[])
plt.show()
Explanation: Using the Meta-Dataset Data Pipeline
This notebook shows how to use meta_dataset’s input pipeline to sample data for the Meta-Dataset benchmark. There are two main ways in which data is sampled:
1. episodic: Returns N-way classification episodes, which contain a support (training) set and a query (test) set. The number of classes (N) may vary from episode to episode.
2. batch: Returns batches of images and their corresponding label, sampled from all available classes.
We first import meta_dataset and other required packages, and define utility functions for visualization. We’ll make use of meta_dataset.data.learning_spec and meta_dataset.data.pipeline; their purpose will be made clear later on.
End of explanation
# 1
BASE_PATH = '/path/to/records'
GIN_FILE_PATH = 'meta_dataset/learn/gin/setups/data_config.gin'
# 2
gin.parse_config_file(GIN_FILE_PATH)
# 3
# Comment out to disable eager execution.
tf.enable_eager_execution()
# 4
def iterate_dataset(dataset, n):
if not tf.executing_eagerly():
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
with tf.Session() as sess:
for idx in range(n):
yield idx, sess.run(next_element)
else:
for idx, episode in enumerate(dataset):
if idx == n:
break
yield idx, episode
# 5
SPLIT = learning_spec.Split.TRAIN
Explanation: Primers
Download your data and process it as explained in the link. Set BASE_PATH to point at the processed tf-records ($RECORDS in the conversion instructions).
meta_dataset supports many different settings for sampling data. We use gin-config to control the default parameters of our functions. You can go to the default gin file we are pointing to and see the default values.
You can use meta_dataset in eager or graph mode.
Let's write a generator that makes the right calls to return data from dataset. dataset.make_one_shot_iterator() returns an iterator where each element is an episode.
SPLIT is used to define which part of the meta-split is going to be used. Different splits have different classes and the details on how they are created can be found in the paper.
End of explanation
ALL_DATASETS = ['aircraft', 'cu_birds', 'dtd', 'fungi', 'ilsvrc_2012',
'omniglot', 'quickdraw', 'vgg_flower']
all_dataset_specs = []
for dataset_name in ALL_DATASETS:
dataset_records_path = os.path.join(BASE_PATH, dataset_name)
dataset_spec = dataset_spec_lib.load_dataset_spec(dataset_records_path)
all_dataset_specs.append(dataset_spec)
Explanation: Reading datasets
In order to sample data, we need to read the dataset_spec files for each dataset. Following snippet reads those files into a list.
End of explanation
use_bilevel_ontology_list = [False]*len(ALL_DATASETS)
use_dag_ontology_list = [False]*len(ALL_DATASETS)
# Enable ontology aware sampling for Omniglot and ImageNet.
use_bilevel_ontology_list[5] = True
use_dag_ontology_list[4] = True
variable_ways_shots = config.EpisodeDescriptionConfig(
num_query=None, num_support=None, num_ways=None)
dataset_episodic = pipeline.make_multisource_episode_pipeline(
dataset_spec_list=all_dataset_specs,
use_dag_ontology_list=use_dag_ontology_list,
use_bilevel_ontology_list=use_bilevel_ontology_list,
episode_descr_config=variable_ways_shots,
split=SPLIT,
image_size=84,
shuffle_buffer_size=300)
Explanation: (1) Episodic Mode
meta_dataset uses tf.data.Dataset API and it takes one call to pipeline.make_multisource_episode_pipeline(). We loaded or defined most of the variables used during this call above. The remaining parameters are explained below:
use_bilevel_ontology_list: This is a list of booleans indicating whether corresponding dataset in ALL_DATASETS should use bilevel ontology. Omniglot is set up with a hierarchy with two level: the alphabet (Latin, Inuktitut...), and the character (with 20 examples per character).
The flag means that each episode will contain classes from a single alphabet.
use_dag_ontology_list: This is a list of booleans indicating whether corresponding dataset in ALL_DATASETS should use dag_ontology. Same idea for ImageNet, except it uses the hierarchical sampling procedure described in the article.
image_size: All images from the various datasets are down- or upsampled to the same size. This flag controls the edge size of the square.
shuffle_buffer_size: Controls the amount of shuffling among examples from any given class.
End of explanation
# 1
idx, (episode, source_id) = next(iterate_dataset(dataset_episodic, 1))
print('Got an episode from dataset:', all_dataset_specs[source_id].name)
# 2
for t, name in zip(episode,
['support_images', 'support_labels', 'support_class_ids',
'query_images', 'query_labels', 'query_class_ids']):
print(name, t.shape)
# 3
episode = [a.numpy() for a in episode]
# 4
support_class_ids, query_class_ids = episode[2], episode[5]
print(Counter(support_class_ids))
print(Counter(query_class_ids))
Explanation: Using Dataset
The episodic dataset consists of tuples of the form (Episode, data source ID). The data source ID is an integer Tensor containing a value in the range [0, len(all_dataset_specs) - 1]
signifying which of the datasets of the multisource pipeline the given episode
came from. Episodes consist of support and query sets and we want to learn to classify images at the query set correctly given the support images. For both support and query set we have images, labels and class_ids. Labels are transformed class_ids offset to zero, so that global class_ids are set to [0, N] where N is the number of classes in an episode.
As one can see the number of images in query set and support set is different. Images are scaled, copied into 84*84*3 tensors. Labels are presented in two forms:
*_labels are relative to the classes selected for the current episode only. They are used as targets for this episode.
*_class_ids are the original class ids relative to the whole dataset. They are used for visualization and diagnostics.
It is easy to convert the tensors of the episode into NumPy arrays and use them outside of the TensorFlow framework.
Classes might have different number of samples in the support set, whereas each class has 10 samples in the query set.
End of explanation
# 1
N_EPISODES=2
# 2, 3
for idx, (episode, source_id) in iterate_dataset(dataset_episodic, N_EPISODES):
print('Episode id: %d from source %s' % (idx, all_dataset_specs[source_id].name))
episode = [a.numpy() for a in episode]
plot_episode(support_images=episode[0], support_class_ids=episode[2],
query_images=episode[3], query_class_ids=episode[5])
Explanation: Visualizing Episodes
Let's visualize the episodes.
Support and query set for each episode plotted sequentially. Set N_EPISODES to control number of episodes visualized.
Each episode is sampled from a single dataset and include N different classes. Each class might have different number of samples in support set, whereas number of images in query set is fixed. We limit number of classes and images per class to 10 in order to create legible plots. Actual episodes might have more classes and samples.
Each column represents a distinct class and dataset specific class ids are plotted on the x_axis.
End of explanation
BATCH_SIZE = 16
ADD_DATASET_OFFSET = True
dataset_batch = pipeline.make_multisource_batch_pipeline(
dataset_spec_list=all_dataset_specs, batch_size=BATCH_SIZE, split=SPLIT,
image_size=84, add_dataset_offset=ADD_DATASET_OFFSET,
shuffle_buffer_size=1000)
for idx, ((images, labels), source_id) in iterate_dataset(dataset_batch, 1):
print(images.shape, labels.shape)
N_BATCH = 2
for idx, (batch, source_id) in iterate_dataset(dataset_batch, N_BATCH):
print('Batch-%d from source %s' % (idx, all_dataset_specs[source_id].name))
plot_batch(*map(lambda a: a.numpy(), batch), size_multiplier=0.5)
Explanation: (2) Batch Mode
The second mode that the meta_dataset library provides is the batch mode, where one can sample batches from the list of datasets in a non-episodic manner and use them to train baseline models. There are a couple of things to note here:
Each batch is sampled from a different dataset.
ADD_DATASET_OFFSET controls whether the class_id's returned by the iterator overlaps among different datasets or not. A dataset specific offset is added in order to make returned ids unique.
make_multisource_batch_pipeline() creates a tf.data.Dataset object that returns datasets of the form (Batch, data source ID) where similarly to the
episodic case, the data source ID is an integer Tensor that identifies which
dataset the given batch originates from.
shuffle_buffer_size controls the amount of shuffling done among examples from a given dataset (unlike for the episodic pipeline).
End of explanation
#1
NUM_WAYS = 8
NUM_SUPPORT = 3
NUM_QUERY = 5
fixed_ways_shots = config.EpisodeDescriptionConfig(
num_ways=NUM_WAYS, num_support=NUM_SUPPORT, num_query=NUM_QUERY)
#2
use_bilevel_ontology_list = [False]*len(ALL_DATASETS)
use_dag_ontology_list = [False]*len(ALL_DATASETS)
quickdraw_spec = [all_dataset_specs[6]]
#3
dataset_fixed = pipeline.make_multisource_episode_pipeline(
dataset_spec_list=quickdraw_spec, use_dag_ontology_list=[False],
use_bilevel_ontology_list=use_bilevel_ontology_list, split=SPLIT,
image_size=84, episode_descr_config=fixed_ways_shots)
N_EPISODES = 2
for idx, (episode, source_id) in iterate_dataset(dataset_fixed, N_EPISODES):
print('Episode id: %d from source %s' % (idx, quickdraw_spec[source_id].name))
episode = [a.numpy() for a in episode]
plot_episode(support_images=episode[0], support_class_ids=episode[2],
query_images=episode[3], query_class_ids=episode[5])
Explanation: (3) Fixing Ways and Shots
The meta_dataset library provides the option to set the number of classes/samples per episode. There are 3 main flags you can set.
NUM_WAYS: Fixes the # classes per episode. We would still get variable number of samples per class in the support set.
NUM_SUPPORT: Fixes # samples per class in the support set.
NUM_SUPPORT: Fixes # samples per class in the query set.
If we want to use fixed num_ways, we have to disable ontology based sampling for omniglot and imagenet. We advise using single dataset for using this feature, since using multiple datasets is not supported/tested. In this notebook, we are using Quick, Draw! Dataset.
We sample episodes and visualize them as we did earlier.
End of explanation
import torch
# 1
to_torch_labels = lambda a: torch.from_numpy(a.numpy()).long()
to_torch_imgs = lambda a: torch.from_numpy(np.transpose(a.numpy(), (0, 3, 1, 2)))
# 2
def data_loader(n_batches):
for i, (e, _) in enumerate(dataset_episodic):
if i == n_batches:
break
yield (to_torch_imgs(e[0]), to_torch_labels(e[1]),
to_torch_imgs(e[3]), to_torch_labels(e[4]))
for i, batch in enumerate(data_loader(n_batches=2)):
#3
data_support, labels_support, data_query, labels_query = [x.cuda() for x in batch]
print(data_support.shape, labels_support.shape, data_query.shape, labels_query.shape)
Explanation: (4) Using Meta-dataset with PyTorch
As mentioned above it is super easy to consume meta_dataset as NumPy arrays. This also enables easy integration into other popular deep learning frameworks like PyTorch. TensorFlow code processes the data and passes it to PyTorch, ready to be consumed. Since the data loader and processing steps do not have any operation on the GPU, TF should not attempt to grab the GPU, and it should be available for PyTorch.
1. Let's use an episodic dataset created earlier, dataset_episodic, and build on top of it. We will transpose tensor to CHW, which is the common order used by convolutional layers of PyTorch.
2. We will use zero-indexed labels, therefore grabbing e[1] and e[4]. At the end we return a generator that consumes the tf.Dataset.
3. Using .cuda() on PyTorch tensors should distribute them to appropriate devices.
End of explanation |
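As a quick end-to-end check of the conversion above, here is a small, hypothetical consumer of one episode: a nearest-centroid baseline on raw flattened pixels. It is not part of the meta_dataset API, and a real few-shot model would use a learned embedding; this sketch only illustrates how the support and query tensors line up.
one_episode = next(data_loader(n_batches=1))
data_support, labels_support, data_query, labels_query = one_episode
support_flat = data_support.reshape(data_support.shape[0], -1)   # (n_support, 3*84*84)
query_flat = data_query.reshape(data_query.shape[0], -1)         # (n_query, 3*84*84)
classes = labels_support.unique()
# one prototype per class = mean of that class's flattened support images (toy baseline only)
prototypes = torch.stack([support_flat[labels_support == c].mean(dim=0) for c in classes])
predictions = classes[torch.cdist(query_flat, prototypes).argmin(dim=1)]
print('Nearest-centroid accuracy on raw pixels:', (predictions == labels_query).float().mean().item())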
13,465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A neural network from first principles
The code below was adapted from the code supplied in Andrew Ng's Coursera course on machine learning. The original code was written in Matlab/Octave, and in order to further my understanding and enhance my Python skills, I ported it over to Python.
To start, various libraries are imported.
Step1: First, we load the data. For details, please see the accompanying notebook MNIST-loader.ipynb for details.
Step2: Now let's define some useful functions for the neural network to use. First is the sigmoid activation function
Step3: The neural network
Below is the real meat of this notebook
Step4: Next is the predict function. This function takes the learned weights and performs forward propagation through the network using the x values supplied in the arguments. The effect of this is essentially to predict the output class of the given data using the weights that have been learned. We also calculate the cost here, because the actual cost value (and its calculation) is only necessary if monitoring is set to True. Note
Step5: We initialize theta with a set of random weights with a standard deviation of $ 1/\sqrt{n} $
Step6: Stochastic gradient descent
This function handles the SGD part of the learning, and will be called later on in the script when we're ready to learn the model.
First, the function calls weight_init() to initialize the starting weights. Empty lists are created for storing the costs and accuracies over the course of learning. Next, the function loops over the number of epochs. In each loop, the x and y matrices are shuffled and divided into mini-batches. Looping through all of the mini-batches, the nn() function is called to perform forward and backpropagation and update the weights accordingly. If the monitor flags are set when calling SGD(), the predict function will produce the cost and accuracies and store them in the empty lists we created earlier.
Step7: Finally, we train the model
First, we specify the model parameters. Lambda is the regularization parameter, which protects against overfitting. The variable classes specifies the number of nodes in the output layer, m is the number of features in the data set (this also doubles as the number of input layers, see below), and epochs, minibatchsize and rate parameters are fairly self explanatory.
The layers variable is a list, wherein each element of the list corresponds to a layer in the network (including the input and output layers). For example, the three-layer network we've been working with until now is defined by [784, 100, 10], i.e. 784 features in the input layer, 100 neurons in the single hidden layer, and 10 output neurons.
Now that all of the various elements have been coded, and the parameters have been set, we're ready to train the model using the training set, and plot the cost/accuracies.
Step8: Visualizing cost and accuracy as a function of epochs
This quick code simply plots the cost versus number of epochs and training and testing set accuracies versus number of epochs
Step9: Visualizing the handwritten numbers
Here are two quick functions to visualize the actual data. First, we randomly select 100 data points and plot them. The second function grabs a single random data point, plots the image and uses the model above to predict the output. | Python Code:
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
import math
from sklearn.metrics import accuracy_score
import pickle
import sys
Explanation: A neural network from first principles
The code below was adapted from the code supplied in Andrew Ng's Coursera course on machine learning. The original code was written in Matlab/Octave, and in order to further my understanding and enhance my Python skills, I ported it over to Python.
To start, various libraries are imported.
End of explanation
# Load data
with open('./data/pickled/xtrain.pickle', 'rb') as f:
xtrain = pickle.load(f)
with open('./data/pickled/ytrain.pickle', 'rb') as f:
ytrain = pickle.load(f)
with open('./data/pickled/xtest.pickle', 'rb') as f:
xtest = pickle.load(f)
with open('./data/pickled/ytest.pickle', 'rb') as f:
ytest = pickle.load(f)
with open('./data/pickled/xval.pickle', 'rb') as f:
xval = pickle.load(f)
with open('./data/pickled/yval.pickle', 'rb') as f:
yval = pickle.load(f)
Explanation: First, we load the data. For details, please see the accompanying notebook MNIST-loader.ipynb for details.
End of explanation
# Sigmoid function
def sigmoid(z):
return 1.0 / (1.0 + np.exp(-z))
Explanation: Now let's define some useful functions for the neural network to use. First is the sigmoid activation function:
End of explanation
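A small companion helper (a sketch only; the code below does not call it, since backpropagation inlines the derivative as a*(1-a)): the derivative of the sigmoid, with a quick numerical check.
def sigmoid_prime(z):
    # derivative of the sigmoid expressed in terms of z
    s = sigmoid(z)
    return s * (1.0 - s)
# sanity check against a centered finite difference at z = 0.5
eps = 1e-6
numeric = (sigmoid(0.5 + eps) - sigmoid(0.5 - eps)) / (2 * eps)
print(np.isclose(sigmoid_prime(0.5), numeric))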
def nn(weights,x,y):
### Initialization
n = len(x)
activations = [np.array(0) for i in range(n_layers)]
activations[0] = x
deltas = [np.array(0) for i in range(n_layers-1)]
bias = np.ones((n,1))
### Forward propagation
for w,l in zip(weights,range(1,n_layers)):
inputs = np.concatenate((bias,activations[l-1]),axis=1)
activations[l] = sigmoid(np.dot(inputs,w.T))
### Output error
deltas[-1] = activations[-1] - y
### Back propagation
for l in range(2,n_layers):
deltas[-l] = np.dot(deltas[-(l-1)],weights[-(l-1)][:,1:]) * activations[-l]*(1-activations[-l])
# Update the weights / biases
for w,l in zip(weights,range(len(layers)-1,0,-1)):
w[:,1:] = w[:,1:] * (1-rate*Lambda/n) - np.dot(deltas[-l].T,activations[-(l+1)])*rate/n
        w[:,:1] -= np.sum(deltas[-l], axis=0).reshape(-1,1)*rate/n  # per-neuron bias gradient, summed over the mini-batch
return weights
Explanation: The neural network
Below is the real meat of this notebook: the neural network function. The function takes the current list of weight matrices along with a mini-batch of inputs x and one-hot labels y. At the start the weights will be randomly initialized, but as the network learns these values will be changed and improved in order to minimize the cost function.
Let's walk through the basics. The weights arrive as a list with one matrix per layer transition. For the three-layer [784, 100, 10] network used later, the first matrix has shape 100X785 (hidden X (n+1), with n = 784 input features) and the second has shape 10X101 (classes X (hidden+1)). The n+1 and hidden+1 take into account the bias term, which I'll discuss below.
Forward propagation
Next, we perform forward propagation. This is relatively simple: we multiply the input values in the layer by the weights for that layer, sum the total and add the bias term, and then and apply the sigmoid function. Recall from the reading in week one:
$$
z = w.x+b
$$
$$
a = \sigma(z) = \frac{1}{(1+e^{-z})}
$$
Since w.x is the dot product, it implies the sum. Basic matrix multiplication says that when multiplying two matrices that have the same internal dimension (ie number of columns in matrix one is the same as number of rows in matrix two), each element in row i of matrix one is multiplied by each element in column i of matrix two, and all of those products are summed. This value goes into the first row of the resulting matrix. Subsequently, the same is repeated for the remaining columns of the second matrix, and the first row of the output matrix is completed. This process goes on for all remaining rows in the first matrix.
In our example, the first matrix contains MNIST images in a row-wise format. The second matrix contains the weights for each connection between the input layers and the hidden layers. So following the logic from above, the first row of the input matrix (i.e. the first image in the data set) is multiplied by each of 10 sets of weights (10 columns in the weights matrix), one for each hidden layer. Because it's matrix mulitplication, all of these products are automatically summed.
A quick note about bias
If you look at the code below (and elsewhere in this notebook) you'll find a number of n+1's and hidden+1's, etc. These account for bias, the b term in the equation above. Every time forward propagation is run, and extra column of ones is appended onto the end of the matrix (these are not a part of the actual data). When the weights are randomly initialized, they too have an extra weight included for this bias term (i.e. the dimensions are n+1Xhidden). These two values, bias in the input matrix and bias in the weights matrix, are multiplied during matrix multiplication and their product is added to the total sum for that neuron. Because the value for bias in the input matrix is always 1, the actual value of the bias is thus coded in the weights and can be learned just like a regular weight.
So to sum it all up, for each connection between a node in the input (i.e. a feature, a pixel in the image) and a node in the hidden layer, the input value is multiplied by the weight of each connection and these products for all features are added. To incorporate bias, we include an extra input value of 1 and multiply is by it's own weight. The sigmoid function is applied to this sum, and generates the value of the hidden layer for this particular data point.
Continuing on with forward propagation
Now we have the values of the hidden layer, we repeat this process once again with the weights for the connections between the hidden layer and the output layer. Nothing changes here, except for the sizes of the matrices. Recall that we had n input nodes and, say, 20 hidden layer nodes. That means we had n+1 weights (adding 1 for the bias term), so here we will have hidden+1 weights.
At the end of the second forward propagation, we will have a matrix with a row for each example in the data set and a column for each output class in the neural network (i.e. 10). The columns will contain the value the neural network determined for each class. If the network learned how to identify handwritten digits, the highest of these values will correspond with the correct output. At this point, however, our network has done no learning so we wouldn't expect anything better than a random guess (since the weights were randomly initialized!)
The cost function
Next comes the cost function. Here, we implement the cross entropy cost function with weight decay or L2 regularization. I implemented this in two lines for clarity's sake. First, the unregularized cost is determined and subsequently the regularization term is added.
Note: In the SGD version of this notebook, I removed the cost function. The scipy.optimize version required the cost to be calculated during training, however for SGD the cost is incoporated into the weight updates (or rather, its derivative w.r.t the weights/biases is incoporated) and so computing the cost each time is a waste of resources since it won't be used. Instead, I moved the cost calculation into the predict function which is 1) only called if the various monitoring parameters are set to True when training is inititated, and 2) is only calculated once per epoch, instead of on each minibatch (i.e. for 30 epochs and a mini-batch size of 10, it is calculated 30 times, instead of 15 000 times).
And finally, back propagation
First, we find the difference between the output values in the 10 classes and the real value. In this case, the real value for each of the 10 possible digits is a vector or length 10, with a 1 in the position representing the number in question and 0's everywhere else. For example, the number 3 is represented by [0 0 0 1 0 0 0 0 0 0]. Since the outputs are from sigmoid neurons, the values will be between 0 and 1 with the highest value indicating the number the model predicted. Sticking with the above example, we might expect our model to output something like [0.1 0.2 0.1 0.6 0.1 0.2 0.2 0.1 0.2 0.3]. Subtracting these two will give a measure of the error. The larger the value, the more incorrect that class prediction was.
To perform backpropagation, first we find the delta for the final layer (in this case, d3). This is simply the actual value (which is one-hot encoded) subtracted from the neural networks prediction for that value.
Next, we multiply the error from layer 3 by the weights that produced layer three's activation values. This is a matrix multiplication which automatically sums the totals. In this case, the matrices have dimensions [batch X 10] and [10 X hidden] (the bias column of the weights is dropped first), for a product of [batch X hidden]. We can simply perform an elementwise multiplication with the derivative of the sigmoid function with respect to the activations in layer 2 to get the error at layer 2.
Since we only have three layers here, we're done. There is no error on layer 1 since this was the input layer. We can't find the error on the raw values that are input to the network!
Now we would use these two delta values to update the weights and biases and then run the network again. Rinse and repeat until the cost function is appreciably minimized.
Wrapping this function up
As you can see, the nn function takes in the current weights, performs forward propagation to predict the output, and then performs backpropagation to determine the rate of change of the cost function with respect to the weights and biases, using the learning rate and the Lambda regularization parameter to update them. The updated list of weight matrices is returned by the nn() function; as noted above, the cost itself is now computed in predict() rather than here.
End of explanation
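To make the shapes above concrete, here is a tiny standalone walk-through of one forward pass with random data, assuming the [784, 100, 10] architecture used later in this notebook (illustration only; none of these arrays are reused below).
demo_x = np.random.rand(5, 784)                                      # 5 fake examples
demo_w1 = np.random.normal(scale=1/np.sqrt(784), size=(100, 785))    # hidden x (n+1)
demo_w2 = np.random.normal(scale=1/np.sqrt(100), size=(10, 101))     # classes x (hidden+1)
demo_bias = np.ones((5, 1))
a1 = sigmoid(np.dot(np.concatenate((demo_bias, demo_x), axis=1), demo_w1.T))   # (5, 100)
a2 = sigmoid(np.dot(np.concatenate((demo_bias, a1), axis=1), demo_w2.T))       # (5, 10)
print(a1.shape, a2.shape)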
def predict(weights,x,y):
### Initialization
n = len(x)
activations = [np.array(0) for i in range(n_layers)]
activations[0] = x
bias = np.ones((n,1))
### Forward propagation
for w,l in zip(weights,range(1,n_layers)):
inputs = np.concatenate((bias,activations[l-1]),axis=1)
activations[l] = sigmoid(np.dot(inputs,w.T))
# Cost function: regularized cross entropy
C = np.sum(np.nan_to_num(-y*np.log(activations[-1]) - (1-y)*(np.log(1-activations[-1]))))/n
ws_sum_squares = 0
for l in range(n_layers-1):
ws_sum_squares += np.sum(weights[l][:,1:]**2)
C += ((Lambda/(2*n))) * ws_sum_squares # Add regularization to the cost function
return np.argmax(activations[-1],axis=1),C
Explanation: Next is the predict function. This function takes the learned weights and performs forward propagation through the network using the x values supplied in the arguments. The effect of this is essentially to predict the output class of the given data using the weights that have been learned. We also calculate the cost here, because the actual cost value (and its calculation) is only necessary if monitoring is set to True. Note: this function is only called by the accuracy tools at the end, and thus doesn't need to perform backpropagation or do any learning.
End of explanation
def weight_init(L_in,L_out):
np.random.seed(13) # This makes testing consistent.
return np.random.normal(scale=1/np.sqrt(L_in), size=(L_out,L_in+1))
Explanation: We initialize theta with a set of random weights with a standard deviation of $ 1/\sqrt{n} $
End of explanation
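A quick usage check of weight_init (illustration only): the returned matrix includes the extra bias column, and its empirical standard deviation should sit near 1/sqrt(L_in).
w_demo = weight_init(784, 100)
print(w_demo.shape)                                     # (100, 785): one extra column for the bias
print(round(w_demo.std(), 4), round(1/np.sqrt(784), 4))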
def SGD(x,y,monitor_cost,monitor_train_acc,monitor_test_acc):
# Make list of weights arrays
weights = [np.array(0) for i in range(len(layers)-1)]
for l in range(len(layers)-1):
weights[l] = weight_init(layers[l],layers[l+1]) #[layers-1,[L_in+1,Lout]]
def shuffle(x,y):
state = np.random.get_state()
np.random.shuffle(x)
np.random.set_state(state)
np.random.shuffle(y)
return x,y
costs, test_acc, train_acc = [],[],[]
for j in range(epochs):
# Shuffle the data
x,y = shuffle(x,y)
# Seperate x,y mini-batches
mini_x = [x[k:k+minibatchsize] for k in range(0,len(x),minibatchsize)]
mini_y = [y[k:k+minibatchsize] for k in range(0,len(y),minibatchsize)]
# Iterate through pairs of mini-batches, calling nn() on each pair
for x_mini,y_mini in zip(mini_x,mini_y):
weights = nn(weights,x_mini,y_mini)
# If statements for monitoring. This ensures the predict() function isn't called unnecessarily
if monitor_cost | monitor_train_acc:
ypred, C = predict(weights,x,y)
if monitor_cost:
costs.append(C)
if monitor_train_acc:
train_acc.append(accuracy_score(np.argmax(y,axis=1),ypred))
if monitor_test_acc:
test_acc.append(accuracy_score(np.argmax(ytest,axis=1),predict(weights,xtest,ytest)[0]))
# Write progress monitor
progress = (j+1)/(epochs)*100.0
bar = 20
hashes = '#'*(int(round(progress/100*bar)))
spaces = ' '*(bar-len(hashes))
sys.stdout.write('\r[{0}] {1}%'.format(hashes + spaces, round(progress,2)))
return weights,costs,train_acc,test_acc
Explanation: Stochastic gradient descent
This function handles the SGD part of the learning, and will be called later on in the script when we're ready to learn the model.
First, the function calls weight_init() to initialize the starting weights. Empty lists are created for storing the costs and accuracies over the course of learning. Next, the function loops over the number of epochs. In each loop, the x and y matrices are shuffled and divided into mini-batches. Looping through all of the mini-batches, the nn() function is called to perform forward and backpropagation and update the weights accordingly. If the monitor flags are set when calling SGD(), the predict function will produce the cost and accuracies and store them in the empty lists we created earlier.
End of explanation
# Model parameters
m = np.int(xtrain.shape[1]) # Number of features in each example
layers = [m, 100, 10]
n_layers = len(layers)
# Learning parameters
Lambda = 0.01
epochs = 40
minibatchsize = 50
rate = 0.3
# Train the model
weights, costs, train_acc, test_acc = SGD(xtrain,ytrain,True,True,True)
# Plot the results
# Note: don't bother calling unless the monitor parameters are set...
plot()
accuracy_score(np.argmax(yval,axis=1),predict(weights,xval,yval)[0])
Explanation: Finally, we train the model
First, we specify the model parameters. Lambda is the regularization parameter, which protects against overfitting; m is the number of features in the data set (this also doubles as the size of the input layer); the last entry of layers sets the number of nodes in the output layer; and the epochs, minibatchsize and rate parameters are fairly self-explanatory.
The layers variable is a list, wherein each element of the list corresponds to a layer in the network (including the input and output layers). For example, the three-layer network we've been working with until now is defined by [784, 100, 10], i.e. 784 features in the input layer, 100 neurons in the single hidden layer, and 10 output neurons.
Now that all of the various elements have been coded, and the parameters have been set, we're ready to train the model using the training set, and plot the cost/accuracies.
End of explanation
def plot():# Visualize the cost and accuracy
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(221)
ax.plot(np.arange(epochs), costs, "-")
ax.set_xlim([0, epochs])
ax.set_xlabel('Epoch')
ax.set_ylabel('Cost')
ax.set_title('Cost over epochs')
ax = fig.add_subplot(222)
ax.plot(np.arange(epochs), train_acc, "-",color='blue',label="Training data, final acc: "+str(train_acc[-1]))
ax.plot(np.arange(epochs), test_acc, "-",color='orange',label="Testing data, final acc: "+str(test_acc[-1]))
ax.set_xlim([0, epochs])
ax.set_xlabel('Epoch')
ax.set_ylabel('Accuracy')
plt.legend(loc='lower right')
ax.set_title('Accuracy over epochs')
plt.show()
Explanation: Visualizing cost and accuracy as a function of epochs
This quick code simply plots the cost versus number of epochs and training and testing set accuracies versus number of epochs
End of explanation
# Visualize the data
def drawplot(draw,x,y):
if draw:
n = x.shape[0]
idx = np.random.randint(0,n,size=100) # Make an array of random integers between 0 and n
fig, ax = plt.subplots(10, 10) # make the plots
img_size = math.sqrt(m) # Specify the image size (in these case sqrt(m) = 28)
for i in range(10):
for j in range(10):
Xi = x[idx[i*10+j],:].reshape(int(img_size), int(img_size)) # get each example and resize
ax[i,j].set_axis_off() # Turns off the axes for all the subplots for clarity
ax[i,j].imshow(Xi, aspect='auto',cmap='gray') # plots the current image in the correct position
plt.show()
drawplot(True,xtrain,ytrain)
# Interactive printer function
def printer(x,y,weights):
idx = np.random.randint(len(x),size=1)
img_size = int(math.sqrt(m))
xi = x[idx,:].reshape(img_size,img_size)
yi = predict(weights,x[idx,:],y[idx,:])[0]
plt.title('The predicted value is %i\n The true value is %i' %(yi,np.argmax(y[idx,:],axis=1)))
plt.imshow(xi, aspect='auto',cmap='gray')
plt.axis('off')
plt.show()
# Running this cell will draw a single image
# The predicted and real value for y is printed above
printer(xtest,ytest,weights)
Explanation: Visualizing the handwritten numbers
Here are two quick functions to visualize the actual data. First, we randomly select 100 data points and plot them. The second function grabs a single random data point, plots the image and uses the model above to predict the output.
End of explanation |
13,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a better model
Following the baseline model and some feature engineering, we will now build a better predictive model. This will follow a few new patterns
Step1: Data
Step2: Cleaning, imputing missing values, feature engineering (some NLP)
Step3: Train, test split
Step4: We can use past years as predictors of future years. One challenge with this approach is that we confound time-sensitive trends (for example, global economic shocks to interest rates - such as the financial crisis of 2008, or the growth of Lending Club to broader and broader markets of debtors) with differences related to time-insensitive factors (such as a debtor's riskiness).
To account for this, we can bundle our training and test sets into the following blocks
Step5: We'll use the pre-2015 data on interest rates (old) to fit a model and cross-validate it. We'll then use the post-2015 data as a 'wild' dataset to test against.
Fitting the model
Step6: Fitting the model
We fit the model on all the data, and evaluate feature importances. | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os
import sys
import sklearn
import sqlite3
import matplotlib
import numpy as np
import pandas as pd
import enchant as en
import seaborn as sns
import statsmodels.api as sm
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.cross_validation import train_test_split, cross_val_score
src_dir = os.path.join(os.getcwd(), os.pardir, 'src')
sys.path.append(src_dir)
%aimport data
from data import make_dataset as md
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (16.0, 6.0)
plt.rcParams['legend.markerscale'] = 3
matplotlib.rcParams['font.size'] = 16.0
Explanation: Building a better model
Following the baseline model and some feature engineering, we will now build a better predictive model. This will follow a few new patterns:
1. We will import data cleaning and feature engineering stuff from external Python modules we've built (for standardization across our machines).
2. We will cross-validate across time: that is, the model will be trained on earlier years and tested on later years.
3. Rather than looping through models (and perhaps working more with Pipeline and GridSearch), we will focus on tuning the parameters of the best-performing model from the baseline set.
End of explanation
DIR = os.getcwd() + "/../data/"
t = pd.read_csv(DIR + 'raw/lending-club-loan-data/loan.csv', low_memory=False)
t.head()
Explanation: Data: Preparing for the model
Importing the raw data
End of explanation
t2 = md.clean_data(t)
t3 = md.impute_missing(t2)
df = md.simple_dataset(t3)
# df = md.spelling_mistakes(t3) - skipping for now, so computationally expensive!
Explanation: Cleaning, imputing missing values, feature engineering (some NLP)
End of explanation
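The cleaning and imputation logic itself lives in src/data/make_dataset.py and is not shown in this notebook. Purely as a hypothetical sketch of what a helper like impute_missing might do (the real module may differ):
def impute_missing_sketch(frame):
    # numeric gaps -> column medians; categorical gaps -> an explicit 'missing' level
    out = frame.copy()
    numeric_cols = out.select_dtypes(include=[np.number]).columns
    out[numeric_cols] = out[numeric_cols].fillna(out[numeric_cols].median())
    object_cols = out.select_dtypes(include=['object']).columns
    out[object_cols] = out[object_cols].fillna('missing')
    return out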
df['issue_d'].hist(bins = 50)
plt.title('Seasonality in lending')
plt.ylabel('Frequency')
plt.xlabel('Year')
plt.show()
Explanation: Train, test split: Splitting on 2015
End of explanation
old = df[df['issue_d'] < '2015']
new = df[df['issue_d'] >= '2015']
old.shape, new.shape
Explanation: We can use past years as predictors of future years. One challenge with this approach is that we confound time-sensitive trends (for example, global economic shocks to interest rates - such as the financial crisis of 2008, or the growth of Lending Club to broader and broader markets of debtors) with differences related to time-insensitive factors (such as a debtor's riskiness).
To account for this, we can bundle our training and test sets into the following blocks:
- Before 2015: Training set
- 2015 to current: Test set
End of explanation
X = old.drop(['int_rate', 'issue_d', 'earliest_cr_line', 'grade'], 1)
y = old['int_rate']
X.shape, y.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
rfr = RandomForestRegressor(n_estimators = 10, max_features='sqrt')
scores = cross_val_score(rfr, X, y, cv = 3)
print("Accuracy: {:.2f} (+/- {:.2f})".format(scores.mean(), scores.std() * 2))
X_new = new.drop(['int_rate', 'issue_d', 'earliest_cr_line', 'grade'], 1)
y_new = new['int_rate']
new_scores = cross_val_score(rfr, X_new, y_new, cv = 3)
print("Accuracy: {:.2f} (+/- {:.2f})".format(new_scores.mean(), new_scores.std() * 2))
# QUINN: Let's just use this - all data
X_total = df.drop(['int_rate', 'issue_d', 'earliest_cr_line', 'grade'], 1)
y_total = df['int_rate']
total_scores = cross_val_score(rfr, X_total, y_total, cv = 3)
print("Accuracy: {:.2f} (+/- {:.2f})".format(total_scores.mean(), total_scores.std() * 2))
Explanation: We'll use the pre-2015 data on interest rates (old) to fit a model and cross-validate it. We'll then use the post-2015 data as a 'wild' dataset to test against.
Fitting the model
End of explanation
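Before settling on the default settings, a small sweep of the forest size is cheap to run with the same 3-fold cross-validation. This is a sketch of a possible next step; the candidate values below are illustrative, not results from this project.
for n_est in [10, 50, 100]:
    candidate = RandomForestRegressor(n_estimators=n_est, max_features='sqrt')
    cv_scores = cross_val_score(candidate, X_total, y_total, cv=3)
    print("n_estimators={}: {:.3f} (+/- {:.3f})".format(n_est, cv_scores.mean(), cv_scores.std() * 2))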
rfr.fit(X_total, y_total)
fi = [{'importance': x, 'feature': y} for (x, y) in \
sorted(zip(rfr.feature_importances_, X_total.columns))]
fi = pd.DataFrame(fi)
fi.sort_values(by = 'importance', ascending = False, inplace = True)
fi.head()
top5 = fi.head()
top5.plot(kind = 'bar')
plt.xticks(range(5), top5['feature'])
plt.title('Feature importances (top 5 features)')
plt.ylabel('Relative importance')
plt.show()
Explanation: Fitting the model
We fit the model on all the data, and evaluate feature importances.
End of explanation |
13,467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kaggle San Francisco Crime Classification
Berkeley MIDS W207 Final Project
Step1: DDL to construct table for SQL transformations
Step2: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
Step3: Sarah's School data that we may still get to work as features
Step4: Formatting to meet Kaggle submission standards
Step5: Generate baseline prediction probabilities from MNB classifier and store in a .csv file (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
Step6: Note
Step7: Defining Performance Criteria
As determined by the Kaggle submission guidelines, the performance criteria metric for the San Francisco Crime Classification competition is Multi-class Logarithmic Loss (also known as cross-entropy). There are various other performance metrics that are appropriate for different domains
Step8: Adding Features, Hyperparameter Tuning, and Model Calibration To Improve Prediction For Each Classifier
Here we seek to optimize the performance of our classifiers in a three-step, dynamic engineering process.
1) Feature addition
We previously added components from the weather data into the original SF crime data as new features. We will not repeat work done in our initial submission, where our training dataset did not include these features. For a comparison of how the added features improved our performance with respect to log loss, please refer back to our initial submission.
We can have Kalvin expand on exactly what he did here.
2) Hyperparameter tuning
Each classifier has parameters that we can engineer to further optimize performance, as opposed to using the default parameter values as we did above in the model prototyping cell. This will be specific to each classifier as detailed below.
3) Model calibration
We can calibrate the models via Platt Scaling or Isotonic Regression to attempt to improve their performance.
Platt Scaling
Step9: Model calibration
Step10: Comments on results for Hyperparameter tuning and Calibration for KNN
Step11: Hyperparameter tuning
Step12: Tuning
Step13: Model calibration
Step14: Model calibration | Python Code:
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
Explanation: Kaggle San Francisco Crime Classification
Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore
Environment and Data
End of explanation
data_path = "./data/train_transformed.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()
######### Adding the date back into the data
import csv
import time
import calendar
data_path = "./data/train.csv"
dataCSV = open(data_path, 'rt')
csvData = list(csv.reader(dataCSV))
csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
allData = csvData[1:]
dataCSV.close()
df2 = pd.DataFrame(allData)
df2.columns = csvFields
dates = df2['Dates']
dates = dates.apply(time.strptime, args=("%Y-%m-%d %H:%M:%S",))
dates = dates.apply(calendar.timegm)
print(dates.head())
x_data['secondsFromEpoch'] = dates
colnames = x_data.columns.tolist()
colnames = colnames[-1:] + colnames[:-1]
x_data = x_data[colnames]
#########
######### Adding the weather data into the original crime data
weatherData1 = "./data/1027175.csv"
weatherData2 = "./data/1027176.csv"
dataCSV = open(weatherData1, 'rt')
csvData = list(csv.reader(dataCSV))
csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
allWeatherData1 = csvData[1:]
dataCSV.close()
dataCSV = open(weatherData2, 'rt')
csvData = list(csv.reader(dataCSV))
csvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
allWeatherData2 = csvData[1:]
dataCSV.close()
weatherDF1 = pd.DataFrame(allWeatherData1)
weatherDF1.columns = csvFields
dates1 = weatherDF1['DATE']
sunrise1 = weatherDF1['DAILYSunrise']
sunset1 = weatherDF1['DAILYSunset']
weatherDF2 = pd.DataFrame(allWeatherData2)
weatherDF2.columns = csvFields
dates2 = weatherDF2['DATE']
sunrise2 = weatherDF2['DAILYSunrise']
sunset2 = weatherDF2['DAILYSunset']
# functions for processing the sunrise and sunset times of each day
def get_hour_and_minute(milTime):
hour = int(milTime[:-2])
minute = int(milTime[-2:])
return [hour, minute]
def get_date_only(date):
return time.struct_time(tuple([date[0], date[1], date[2], 0, 0, 0, date[6], date[7], date[8]]))
def structure_sun_time(timeSeries, dateSeries):
sunTimes = timeSeries.copy()
for index in range(len(dateSeries)):
sunTimes[index] = time.struct_time(tuple([dateSeries[index][0], dateSeries[index][1], dateSeries[index][2], timeSeries[index][0], timeSeries[index][1], dateSeries[index][5], dateSeries[index][6], dateSeries[index][7], dateSeries[index][8]]))
return sunTimes
dates1 = dates1.apply(time.strptime, args=("%Y-%m-%d %H:%M",))
sunrise1 = sunrise1.apply(get_hour_and_minute)
sunrise1 = structure_sun_time(sunrise1, dates1)
sunrise1 = sunrise1.apply(calendar.timegm)
sunset1 = sunset1.apply(get_hour_and_minute)
sunset1 = structure_sun_time(sunset1, dates1)
sunset1 = sunset1.apply(calendar.timegm)
dates1 = dates1.apply(calendar.timegm)
dates2 = dates2.apply(time.strptime, args=("%Y-%m-%d %H:%M",))
sunrise2 = sunrise2.apply(get_hour_and_minute)
sunrise2 = structure_sun_time(sunrise2, dates2)
sunrise2 = sunrise2.apply(calendar.timegm)
sunset2 = sunset2.apply(get_hour_and_minute)
sunset2 = structure_sun_time(sunset2, dates2)
sunset2 = sunset2.apply(calendar.timegm)
dates2 = dates2.apply(calendar.timegm)
weatherDF1['DATE'] = dates1
weatherDF1['DAILYSunrise'] = sunrise1
weatherDF1['DAILYSunset'] = sunset1
weatherDF2['DATE'] = dates2
weatherDF2['DAILYSunrise'] = sunrise2
weatherDF2['DAILYSunset'] = sunset2
weatherDF = pd.concat([weatherDF1,weatherDF2[32:]],ignore_index=True)
# Starting off with some of the easier features to work with-- more to come here . . . still in beta
weatherMetrics = weatherDF[['DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYWindSpeed', \
'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY', 'DAILYSunrise', 'DAILYSunset']]
weatherMetrics = weatherMetrics.apply(pd.to_numeric, errors='coerce')  # convert_objects is deprecated in newer pandas; to_numeric performs the same coercion
weatherDates = weatherMetrics['DATE']
timeWindow = 10800 #3 hours
hourlyDryBulbTemp = []
hourlyRelativeHumidity = []
hourlyWindSpeed = []
hourlySeaLevelPressure = []
hourlyVisibility = []
dailySunrise = []
dailySunset = []
daylight = []
test = 0
for timePoint in dates:#dates is the epoch time from the kaggle data
relevantWeather = weatherMetrics[(weatherDates <= timePoint) & (weatherDates > timePoint - timeWindow)]
hourlyDryBulbTemp.append(relevantWeather['HOURLYDRYBULBTEMPF'].mean())
hourlyRelativeHumidity.append(relevantWeather['HOURLYRelativeHumidity'].mean())
hourlyWindSpeed.append(relevantWeather['HOURLYWindSpeed'].mean())
hourlySeaLevelPressure.append(relevantWeather['HOURLYSeaLevelPressure'].mean())
hourlyVisibility.append(relevantWeather['HOURLYVISIBILITY'].mean())
dailySunrise.append(relevantWeather['DAILYSunrise'].iloc[-1])
dailySunset.append(relevantWeather['DAILYSunset'].iloc[-1])
daylight.append(1.0*((timePoint >= relevantWeather['DAILYSunrise'].iloc[-1]) and (timePoint < relevantWeather['DAILYSunset'].iloc[-1])))
    # (daylight is already appended once above, using both sunrise and sunset, so no second append is needed)
if test%100000 == 0:
print(relevantWeather)
test += 1
hourlyDryBulbTemp = pd.Series.from_array(np.array(hourlyDryBulbTemp))
hourlyRelativeHumidity = pd.Series.from_array(np.array(hourlyRelativeHumidity))
hourlyWindSpeed = pd.Series.from_array(np.array(hourlyWindSpeed))
hourlySeaLevelPressure = pd.Series.from_array(np.array(hourlySeaLevelPressure))
hourlyVisibility = pd.Series.from_array(np.array(hourlyVisibility))
dailySunrise = pd.Series.from_array(np.array(dailySunrise))
dailySunset = pd.Series.from_array(np.array(dailySunset))
daylight = pd.Series.from_array(np.array(daylight))
x_data['HOURLYDRYBULBTEMPF'] = hourlyDryBulbTemp
x_data['HOURLYRelativeHumidity'] = hourlyRelativeHumidity
x_data['HOURLYWindSpeed'] = hourlyWindSpeed
x_data['HOURLYSeaLevelPressure'] = hourlySeaLevelPressure
x_data['HOURLYVISIBILITY'] = hourlyVisibility
x_data['DAILYSunrise'] = dailySunrise
x_data['DAILYSunset'] = dailySunset
x_data['Daylight'] = daylight
x_data.to_csv(path_or_buf="C:/MIDS/W207 final project/x_data.csv")
#########
# Impute missing values with mean values:
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
# Shuffle data to remove any underlying pattern that may exist:
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
# Separate training, dev, and test data:
test_data, test_labels = X[800000:], y[800000:]
dev_data, dev_labels = X[700000:800000], y[700000:800000]
train_data, train_labels = X[:700000], y[:700000]
mini_train_data, mini_train_labels = X[:75000], y[:75000]
mini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]
labels_set = set(mini_dev_labels)
print(labels_set)
print(len(labels_set))
print(train_data[:10])
Explanation: DDL to construct table for SQL transformations:
sql
CREATE TABLE kaggle_sf_crime (
dates TIMESTAMP,
category VARCHAR,
descript VARCHAR,
dayofweek VARCHAR,
pd_district VARCHAR,
resolution VARCHAR,
addr VARCHAR,
X FLOAT,
Y FLOAT);
Getting training data into a locally hosted PostgreSQL database:
sql
\copy kaggle_sf_crime FROM '/Users/Goodgame/Desktop/MIDS/207/final/sf_crime_train.csv' DELIMITER ',' CSV HEADER;
SQL Query used for transformations:
sql
SELECT
category,
date_part('hour', dates) AS hour_of_day,
CASE
WHEN dayofweek = 'Monday' then 1
WHEN dayofweek = 'Tuesday' THEN 2
WHEN dayofweek = 'Wednesday' THEN 3
WHEN dayofweek = 'Thursday' THEN 4
WHEN dayofweek = 'Friday' THEN 5
WHEN dayofweek = 'Saturday' THEN 6
WHEN dayofweek = 'Sunday' THEN 7
END AS dayofweek_numeric,
X,
Y,
CASE
WHEN pd_district = 'BAYVIEW' THEN 1
ELSE 0
END AS bayview_binary,
CASE
WHEN pd_district = 'INGLESIDE' THEN 1
ELSE 0
END AS ingleside_binary,
CASE
WHEN pd_district = 'NORTHERN' THEN 1
ELSE 0
END AS northern_binary,
CASE
WHEN pd_district = 'CENTRAL' THEN 1
ELSE 0
END AS central_binary,
CASE
WHEN pd_district = 'BAYVIEW' THEN 1
ELSE 0
END AS pd_bayview_binary,
CASE
WHEN pd_district = 'MISSION' THEN 1
ELSE 0
END AS mission_binary,
CASE
WHEN pd_district = 'SOUTHERN' THEN 1
ELSE 0
END AS southern_binary,
CASE
WHEN pd_district = 'TENDERLOIN' THEN 1
ELSE 0
END AS tenderloin_binary,
CASE
WHEN pd_district = 'PARK' THEN 1
ELSE 0
END AS park_binary,
CASE
WHEN pd_district = 'RICHMOND' THEN 1
ELSE 0
END AS richmond_binary,
CASE
WHEN pd_district = 'TARAVAL' THEN 1
ELSE 0
END AS taraval_binary
FROM kaggle_sf_crime;
Loading the data, version 2, with weather features to improve performance: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
We seek to add features to our models that will improve performance with respect to out desired performance metric. There is evidence that there is a correlation between weather patterns and crime, with some experts even arguing for a causal relationship between weather and crime [1]. More specifically, a 2013 paper published in Science showed that higher temperatures and extreme rainfall led to large increases in conflict. In the setting of strong evidence that weather influences crime, we see it as a candidate for additional features to improve the performance of our classifiers. Weather data was gathered from (insert source). Certain features from this data set were incorporated into the original crime data set in order to add features that were hypothesizzed to improve performance. These features included (insert what we eventually include).
End of explanation
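For reference, the district one-hot columns built with CASE statements in the SQL above can also be generated directly in pandas. A small sketch follows; the district_dummies frame is illustrative and not used elsewhere in this notebook.
district_dummies = pd.get_dummies(df2['PdDistrict'], prefix='pd').add_suffix('_binary')
print(district_dummies.columns.tolist())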
# Data path to your local copy of Sam's "train_transformed.csv", which was produced by ?separate Python script?
data_path_for_labels_only = "/Users/Bryan/Desktop/UC_Berkeley_MIDS_files/Courses/W207_Intro_To_Machine_Learning/Final_Project/sf_crime-master/data/train_transformed.csv"
df = pd.read_csv(data_path_for_labels_only, header=0)
y = df.category.as_matrix()
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above
data_path = "/Users/Bryan/Desktop/UC_Berkeley_MIDS_files/Courses/W207_Intro_To_Machine_Learning/Final_Project/x_data_08_15.csv"
df = pd.read_csv(data_path, header=0)
# Impute missing values with mean values:
x_complete = df.fillna(df.mean())
X_raw = x_complete.as_matrix()
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
# Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare
# crimes from the data for quality issues.
X_minus_trea = X[np.where(y != 'TREA')]
y_minus_trea = y[np.where(y != 'TREA')]
X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
# Separate training, dev, and test data:
test_data, test_labels = X_final[800000:], y_final[800000:]
dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000]
train_data, train_labels = X_final[100000:700000], y_final[100000:700000]
calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000]
# Create mini versions of the above sets
mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000]
mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000]
mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000]
# Create list of the crime type labels. This will act as the "labels" parameter for the log loss functions that follow
crime_labels = list(set(y_final))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
crime_labels_mini_calibrate = list(set(mini_calibrate_labels))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate))
#print(len(train_data),len(train_labels))
#print(len(dev_data),len(dev_labels))
#print(len(mini_train_data),len(mini_train_labels))
#print(len(mini_dev_data),len(mini_dev_labels))
#print(len(test_data),len(test_labels))
#print(len(mini_calibrate_data),len(mini_calibrate_labels))
#print(len(calibrate_data),len(calibrate_labels))
Explanation: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
End of explanation
### Read in zip code data
#data_path_zip = "./data/2016_zips.csv"
#zips = pd.read_csv(data_path_zip, header=0, sep ='\t', usecols = [0,5,6], names = ["GEOID", "INTPTLAT", "INTPTLONG"], dtype ={'GEOID': int, 'INTPTLAT': float, 'INTPTLONG': float})
#sf_zips = zips[(zips['GEOID'] > 94000) & (zips['GEOID'] < 94189)]
### Mapping longitude/latitude to zipcodes
#def dist(lat1, long1, lat2, long2):
# return np.sqrt((lat1-lat2)**2+(long1-long2)**2)
# return abs(lat1-lat2)+abs(long1-long2)
#def find_zipcode(lat, long):
# distances = sf_zips.apply(lambda row: dist(lat, long, row["INTPTLAT"], row["INTPTLONG"]), axis=1)
# return sf_zips.loc[distances.idxmin(), "GEOID"]
#x_data['zipcode'] = 0
#for i in range(0, 1):
# x_data['zipcode'][i] = x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)
#x_data['zipcode']= x_data.apply(lambda row: find_zipcode(row['x'], row['y']), axis=1)
### Read in school data
#data_path_schools = "./data/pubschls.csv"
#schools = pd.read_csv(data_path_schools,header=0, sep ='\t', usecols = ["CDSCode","StatusType", "School", "EILCode", "EILName", "Zip", "Latitude", "Longitude"], dtype ={'CDSCode': str, 'StatusType': str, 'School': str, 'EILCode': str,'EILName': str,'Zip': str, 'Latitude': float, 'Longitude': float})
#schools = schools[(schools["StatusType"] == 'Active')]
### Find the closest school
#def dist(lat1, long1, lat2, long2):
# return np.sqrt((lat1-lat2)**2+(long1-long2)**2)
#def find_closest_school(lat, long):
# distances = schools.apply(lambda row: dist(lat, long, row["Latitude"], row["Longitude"]), axis=1)
# return min(distances)
#x_data['closest_school'] = x_data_sub.apply(lambda row: find_closest_school(row['y'], row['x']), axis=1)
Explanation: Sarah's School data that we may still get to work as features: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
End of explanation
# The Kaggle submission format requires listing the ID of each example.
# This is to remember the order of the IDs after shuffling
#allIDs = np.array(list(df.axes[0]))
#allIDs = allIDs[shuffle]
#testIDs = allIDs[800000:]
#devIDs = allIDs[700000:800000]
#trainIDs = allIDs[:700000]
# Extract the column names for the required submission format
#sampleSubmission_path = "./data/sampleSubmission.csv"
#sampleDF = pd.read_csv(sampleSubmission_path)
#allColumns = list(sampleDF.columns)
#featureColumns = allColumns[1:]
# Extracting the test data for a baseline submission
#real_test_path = "./data/test_transformed.csv"
#testDF = pd.read_csv(real_test_path, header=0)
#real_test_data = testDF
#test_complete = real_test_data.fillna(real_test_data.mean())
#Test_raw = test_complete.as_matrix()
#TestData = MinMaxScaler().fit_transform(Test_raw)
# Here we remember the ID of each test data point, in case we ever decide to shuffle the test data for some reason
#testIDs = list(testDF.axes[0])
Explanation: Formatting to meet Kaggle submission standards: (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
End of explanation
# Generate a baseline MNB classifier and make it return prediction probabilities for the actual test data
#def MNB():
# mnb = MultinomialNB(alpha = 0.0000001)
# mnb.fit(train_data, train_labels)
# print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels))
# return mnb.predict_proba(dev_data)
#MNB()
#baselinePredictionProbabilities = MNB()
# Place the resulting prediction probabilities in a .csv file in the required format
# First, turn the prediction probabilties into a data frame
#resultDF = pd.DataFrame(baselinePredictionProbabilities,columns=featureColumns)
# Add the IDs as a final column
#resultDF.loc[:,'Id'] = pd.Series(testIDs,index=resultDF.index)
# Make the 'Id' column the first column
#colnames = resultDF.columns.tolist()
#colnames = colnames[-1:] + colnames[:-1]
#resultDF = resultDF[colnames]
# Output to a .csv file
# resultDF.to_csv('result.csv',index=False)
Explanation: Generate baseline prediction probabilities from MNB classifier and store in a .csv file (Negated with hashtags for now, as will cause file dependency issues if run locally for everyone. Will be run by Isabell in final notebook with correct files she needs)
End of explanation
## Data sub-setting quality check-point
print(train_data[:1])
print(train_labels[:1])
# Modeling quality check-point with MNB--fast model
def MNB():
mnb = MultinomialNB(alpha = 0.0000001)
mnb.fit(train_data, train_labels)
print("\n\nMultinomialNB accuracy on dev data:", mnb.score(dev_data, dev_labels))
MNB()
Explanation: Note: the code above will shuffle data differently every time it's run, so model accuracies will vary accordingly.
End of explanation
def model_prototype(train_data, train_labels, eval_data, eval_labels):
knn = KNeighborsClassifier(n_neighbors=5).fit(train_data, train_labels)
bnb = BernoulliNB(alpha=1, binarize = 0.5).fit(train_data, train_labels)
mnb = MultinomialNB().fit(train_data, train_labels)
log_reg = LogisticRegression().fit(train_data, train_labels)
neural_net = MLPClassifier().fit(train_data, train_labels)
random_forest = RandomForestClassifier().fit(train_data, train_labels)
decision_tree = DecisionTreeClassifier().fit(train_data, train_labels)
support_vm_step_one = svm.SVC(probability = True)
support_vm = support_vm_step_one.fit(train_data, train_labels)
models = [knn, bnb, mnb, log_reg, neural_net, random_forest, decision_tree, support_vm]
for model in models:
eval_prediction_probabilities = model.predict_proba(eval_data)
eval_predictions = model.predict(eval_data)
print(model, "Multi-class Log Loss:", log_loss(y_true = eval_labels, y_pred = eval_prediction_probabilities, labels = crime_labels_mini_dev), "\n\n")
model_prototype(mini_train_data, mini_train_labels, mini_dev_data, mini_dev_labels)
Explanation: Defining Performance Criteria
As determined by the Kaggle submission guidelines, the performance criteria metric for the San Francisco Crime Classification competition is Multi-class Logarithmic Loss (also known as cross-entropy). There are various other performance metrics that are appropriate for different domains: accuracy, F-score, Lift, ROC Area, average precision, precision/recall break-even point, and squared error.
Multi-class Log Loss: the negative log-likelihood of the true class under the predicted probability distribution, averaged over examples; it rewards well-calibrated probabilities and heavily penalizes confident but wrong predictions, which is why Kaggle uses it for this competition.
Accuracy: the fraction of examples classified correctly; easy to interpret, but misleading when classes are highly imbalanced, as the crime categories are here.
F-score: the harmonic mean of precision and recall; preferred when both false positives and false negatives matter, for example in information retrieval.
Lift: how much better the model performs than random selection within a targeted segment; common in marketing and churn-prediction settings.
ROC Area: the probability that a randomly chosen positive example is ranked above a randomly chosen negative one; useful for comparing rankings independently of any single threshold.
Average precision: a summary of the precision-recall curve; preferred when the positive class is rare, as in anomaly or fraud detection.
Precision/Recall break-even point: the operating point where precision equals recall; a single-number summary often reported in text classification.
Squared-error: the mean squared difference between predictions and targets; the standard criterion in regression-style domains.
Model Prototyping
We will start our classifier and feature engineering process by looking at the performance of various classifiers with default parameter settings in predicting labels on the mini_dev_data:
End of explanation
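A tiny toy example (not part of the project data) of how multi-class log loss behaves: a confident, correct probability earns a much lower loss than a hedged one, while confidently wrong predictions are punished hardest.
toy_true = ['ASSAULT', 'ROBBERY', 'ASSAULT']
confident_probs = [[0.9, 0.1], [0.1, 0.9], [0.9, 0.1]]
hedged_probs = [[0.6, 0.4], [0.4, 0.6], [0.6, 0.4]]
print(log_loss(y_true=toy_true, y_pred=confident_probs, labels=['ASSAULT', 'ROBBERY']))
print(log_loss(y_true=toy_true, y_pred=hedged_probs, labels=['ASSAULT', 'ROBBERY']))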
list_for_ks = []
list_for_ws = []
list_for_ps = []
list_for_log_loss = []
def k_neighbors_tuned(k,w,p):
tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)
dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)
list_for_ks.append(this_k)
list_for_ws.append(this_w)
list_for_ps.append(this_p)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
list_for_log_loss.append(working_log_loss)
#print("Multi-class Log Loss with KNN and k,w,p =", k,",",w,",", p, "is:", working_log_loss)
k_value_tuning = [i for i in range(1,5002,500)]
weight_tuning = ['uniform', 'distance']
power_parameter_tuning = [1,2]
start = time.clock()
for this_k in k_value_tuning:
for this_w in weight_tuning:
for this_p in power_parameter_tuning:
k_neighbors_tuned(this_k, this_w, this_p)
index_best_logloss = np.argmin(list_for_log_loss)
print('For KNN the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss])
end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
Explanation: Adding Features, Hyperparameter Tuning, and Model Calibration To Improve Prediction For Each Classifier
Here we seek to optimize the performance of our classifiers in a three-step, dynamic engineering process.
1) Feature addition
We previously added components from the weather data into the original SF crime data as new features. We will not repeat work done in our initial submission, where our training dataset did not include these features. For a comparison of how the added features improved our performance with respect to log loss, please refer back to our initial submission.
We can have Kalvin expand on exactly what he did here.
2) Hyperparameter tuning
Each classifier has parameters that we can engineer to further optimize performance, as opposed to using the default parameter values as we did above in the model prototyping cell. This will be specific to each classifier as detailed below.
3) Model calibration
We can calibrate the models via Platt Scaling or Isotonic Regression to attempt to improve their performance.
Platt Scaling: fits a logistic (sigmoid) function to the classifier's outputs on a held-out calibration set, mapping raw scores to calibrated probabilities; it works well when calibration data is limited and the distortion is roughly sigmoid-shaped.
Isotonic Regression: fits a non-parametric, monotonically non-decreasing step function to the scores; it is more flexible than Platt scaling but can overfit when the calibration set is small.
For each classifier, we can use CalibratedClassifierCV to perform probability calibration with isotonic regression or sigmoid (Platt Scaling). The parameters within CalibratedClassifierCV that we can adjust are the method ('sigmoid' or 'isotonic') and cv (cross-validation generator). As we will already be training our models before calibration, we will only use cv = 'prefit'. Thus, in practice the cross-validation generator will not be a modifiable parameter for us.
K-Nearest Neighbors
Hyperparameter tuning:
For the KNN classifier, we can seek to optimize the following classifier parameters: n-neighbors, weights, and the power parameter ('p').
End of explanation
# Here we will calibrate the KNN classifier with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV
# with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic",
# corresponding to Platt Scaling and to Isotonic Regression respectively.
list_for_ks = []
list_for_ws = []
list_for_ps = []
list_for_ms = []
list_for_log_loss = []
def knn_calibrated(k,w,p,m):
tuned_KNN = KNeighborsClassifier(n_neighbors=k, weights=w, p=p).fit(mini_train_data, mini_train_labels)
dev_prediction_probabilities = tuned_KNN.predict_proba(mini_dev_data)
ccv = CalibratedClassifierCV(tuned_KNN, method = m, cv = 'prefit')
ccv.fit(mini_calibrate_data, mini_calibrate_labels)
ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
list_for_ks.append(this_k)
list_for_ws.append(this_w)
list_for_ps.append(this_p)
list_for_ms.append(this_m)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
list_for_log_loss.append(working_log_loss)
print("Multi-class Log Loss with KNN and k,w,p =", k,",",w,",",p,",",m,"is:", working_log_loss)
k_value_tuning = [i for i in range(1,5002,500)]
weight_tuning = ['uniform', 'distance']
power_parameter_tuning = [1,2]
methods = ['sigmoid', 'isotonic']
start = time.clock()
for this_k in k_value_tuning:
for this_w in weight_tuning:
for this_p in power_parameter_tuning:
for this_m in methods:
knn_calibrated(this_k, this_w, this_p, this_m)
index_best_logloss = np.argmin(list_for_log_loss)
print('For KNN the best log loss with hyperparameter tuning and calibration is',list_for_log_loss[index_best_logloss], 'with k =', list_for_ks[index_best_logloss], 'w =', list_for_ws[index_best_logloss], 'p =', list_for_ps[index_best_logloss], 'm =', list_for_ms[index_best_logloss])
end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
Explanation: Model calibration:
We will consider embedding this step within the for loop for the hyperparameter tuning. More likely we will pipeline it along with the hyperparameter tuning steps. We will then use GridSearchCV to find the optimized parameters based on our performance metric of Multi-Class Log Loss.
End of explanation
list_for_as = []
list_for_bs = []
list_for_log_loss = []
def BNB_tuned(a,b):
bnb_tuned = BernoulliNB(alpha = a, binarize = b).fit(mini_train_data, mini_train_labels)
dev_prediction_probabilities = bnb_tuned.predict_log_proba(mini_dev_data)
    list_for_as.append(a)
    list_for_bs.append(b)
working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dev_prediction_probabilities, labels = crime_labels_mini_dev)
list_for_log_loss.append(working_log_loss)
print("Multi-class Log Loss with BNB and a,b =", a,",",b,"is:", working_log_loss)
alpha_tuning = [0.00000001,0.0000001,0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 1.0, 2.0, 10.0, 100.0, 1000.0]
binarize_thresholds_tuning = [0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.999, 0.9999]
start = time.clock()
for this_a in alpha_tuning:
for this_b in binarize_thresholds_tuning:
BNB_tuned(this_a, this_b)
index_best_logloss = np.argmin(list_for_log_loss)
print('For BNB the best log loss with hyperparameter tuning is',list_for_log_loss[index_best_logloss], 'with alpha =', list_for_as[index_best_logloss], 'binarization threshold =', list_for_bs[index_best_logloss])
end = time.clock()
print("Computation time for this step is %.2f" % (end-start), 'seconds')
Explanation: Comments on results for Hyperparameter tuning and Calibration for KNN:
We see that the best log loss we achieve for KNN is with _ neighbors, _ weights, and _ power parameter.
When we add-in calibration, we see that the the best log loss we achieve for KNN is with _ neighbors, _ weights, _ power parameter, and _ calibration method.
(Further explanation here?)
Multinomial, Bernoulli, and Gaussian Naive Bayes
Hyperparameter tuning:
For the Bernoulli Naive Bayes classifier and Multinomial Naive Bayes classifer, we seek to optimize the alpha parameter (Laplace smoothing parameter). For the Gaussian Naive Bayes classifier there are no inherent parameters within the classifier function to optimize, but we will look at our log loss before and after adding noise to the data that is hypothesized to give it a more normal (Gaussian) distribution, which is required by the GNB classifier.
Hyperparameter tuning: Bernoulli Naive Bayes
For the Bernoulli Naive Bayes classifier, we seek to optimize the alpha parameter (Laplace smoothing parameter) and the binarize parameter (threshold for binarizing of the sample features). For the binarize parameter, we will create arbitrary thresholds over which our features, which are not binary/boolean features, will be binarized.
End of explanation
def MNB(alpha):
    mnb = MultinomialNB(alpha = alpha)
    mnb.fit(train_data, train_labels)
    print("MultinomialNB accuracy on dev data with alpha =", alpha, ":", mnb.score(dev_data, dev_labels))

alphas = [0.0, 0.000000001,0.00000001,0.0000001,0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 1.0, 2.0, 10.0, 100.0, 1000.0]
for this_alpha in alphas:
    MNB(this_alpha)
Explanation: Hyperparameter tuning: Multinomial Naive Bayes
End of explanation
def GNB():
gnb = GaussianNB()
gnb.fit(train_data, train_labels)
print("GaussianNB accuracy on dev data:",
gnb.score(dev_data, dev_labels))
# Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes
# adding noise can improve performance by making the data more normal:
train_data_noise = np.random.rand(train_data.shape[0],train_data.shape[1])
modified_train_data = np.multiply(train_data,train_data_noise)
gnb_noise = GaussianNB()
gnb_noise.fit(modified_train_data, train_labels)
print("GaussianNB accuracy with added noise:",
      gnb_noise.score(dev_data, dev_labels))
Explanation: Tuning: Gaussian Naive Bayes
End of explanation
### All the work from Sarah's notebook:
import theano
from theano import tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
print (theano.config.device) # We're using CPUs (for now)
print (theano.config.floatX )# Should be 64 bit for CPUs
np.random.seed(0)
from IPython.display import display, clear_output
numFeatures = train_data[1].size
numTrainExamples = train_data.shape[0]
numTestExamples = test_data.shape[0]
print ('Features = %d' %(numFeatures))
print ('Train set = %d' %(numTrainExamples))
print ('Test set = %d' %(numTestExamples))
class_labels = list(set(train_labels))
print(class_labels)
numClasses = len(class_labels)
### Binarize the class labels
def binarizeY(data):
binarized_data = np.zeros((data.size,39))
for j in range(0,data.size):
feature = data[j]
i = class_labels.index(feature)
binarized_data[j,i]=1
return binarized_data
train_labels_b = binarizeY(train_labels)
test_labels_b = binarizeY(test_labels)
numClasses = train_labels_b[1].size
print ('Classes = %d' %(numClasses))
print ('\n', train_labels_b[:5, :], '\n')
print (train_labels[:10], '\n')
###1) Parameters
numFeatures = train_data.shape[1]
numHiddenNodeslayer1 = 50
numHiddenNodeslayer2 = 30
w_1 = theano.shared(np.asarray((np.random.randn(*(numFeatures, numHiddenNodeslayer1))*0.01)))
w_2 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer1, numHiddenNodeslayer2))*0.01)))
w_3 = theano.shared(np.asarray((np.random.randn(*(numHiddenNodeslayer2, numClasses))*0.01)))
params = [w_1, w_2, w_3]
###2) Model
X = T.matrix()
Y = T.matrix()
srng = RandomStreams()
def dropout(X, p=0.):
if p > 0:
X *= srng.binomial(X.shape, p=1 - p)
X /= 1 - p
return X
def model(X, w_1, w_2, w_3, p_1, p_2, p_3):
return T.nnet.softmax(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(T.nnet.sigmoid(T.dot(dropout(X, p_1), w_1)),p_2), w_2)),p_3),w_3))
y_hat_train = model(X, w_1, w_2, w_3, 0.2, 0.5,0.5)
y_hat_predict = model(X, w_1, w_2, w_3, 0., 0., 0.)
### (3) Cost function
# cost = T.mean(T.sqr(y_hat_train - Y))  # squared-error alternative (unused); we use cross-entropy below
cost = T.mean(T.nnet.categorical_crossentropy(y_hat_train, Y))
### (4) Objective (and solver)
alpha = 0.01
def backprop(cost, w):
grads = T.grad(cost=cost, wrt=w)
updates = []
for wi, grad in zip(w, grads):
updates.append([wi, wi - grad * alpha])
return updates
update = backprop(cost, params)
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
y_pred = T.argmax(y_hat_predict, axis=1)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)
miniBatchSize = 10
def gradientDescent(epochs):
for i in range(epochs):
for start, end in zip(range(0, len(train_data), miniBatchSize), range(miniBatchSize, len(train_data), miniBatchSize)):
cc = train(train_data[start:end], train_labels_b[start:end])
clear_output(wait=True)
print ('%d) accuracy = %.4f' %(i+1, np.mean(np.argmax(test_labels_b, axis=1) == predict(test_data))) )
gradientDescent(50)
### How do we decide how many epochs to use? Each epoch is one full pass over the training data.
### Plot the cost for each of the 50 epochs and see how much it declines: if it is still decreasing steeply,
### run more epochs; if it looks like it is flattening out, you can stop.
Explanation: Model calibration:
Here we will calibrate the MNB, BNB, and GNB classifiers with both Platt Scaling and with Isotonic Regression using CalibratedClassifierCV with various parameter settings. The "method" parameter can be set to "sigmoid" or to "isotonic", corresponding to Platt Scaling and to Isotonic Regression respectively.
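As a concrete illustration of this step, here is a minimal calibration sketch for the Bernoulli Naive Bayes model, following the same CalibratedClassifierCV pattern used for KNN above. The alpha and binarize values are placeholders taken from the tuning grid rather than the final tuned values, and the same pattern would apply to MNB and GNB.
list_for_ms = []
list_for_log_loss = []

def BNB_calibrated(m):
    bnb = BernoulliNB(alpha = 0.1, binarize = 0.5).fit(mini_train_data, mini_train_labels)
    ccv = CalibratedClassifierCV(bnb, method = m, cv = 'prefit')
    ccv.fit(mini_calibrate_data, mini_calibrate_labels)
    ccv_prediction_probabilities = ccv.predict_proba(mini_dev_data)
    list_for_ms.append(m)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = ccv_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    print("Multi-class Log Loss with calibrated BNB and method =", m, "is:", working_log_loss)

for this_m in ['sigmoid', 'isotonic']:
    BNB_calibrated(this_m)

index_best_logloss = np.argmin(list_for_log_loss)
print('For calibrated BNB the best log loss is', list_for_log_loss[index_best_logloss], 'with method =', list_for_ms[index_best_logloss])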
Logistic Regression
Hyperparameter tuning:
For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag')
Model calibration:
See above
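A minimal tuning sketch for Logistic Regression, mirroring the KNN/BNB pattern above; for brevity it varies only C with the l2 penalty and the lbfgs solver, and the grid values are illustrative assumptions rather than our final settings.
from sklearn.linear_model import LogisticRegression

list_for_cs = []
list_for_log_loss = []

def LR_tuned(c):
    lr = LogisticRegression(penalty = 'l2', C = c, solver = 'lbfgs').fit(mini_train_data, mini_train_labels)
    lr_prediction_probabilities = lr.predict_proba(mini_dev_data)
    list_for_cs.append(c)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = lr_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    print("Multi-class Log Loss with LR and C =", c, "is:", working_log_loss)

c_tuning = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
for this_c in c_tuning:
    LR_tuned(this_c)

index_best_logloss = np.argmin(list_for_log_loss)
print('For LR the best log loss with hyperparameter tuning is', list_for_log_loss[index_best_logloss], 'with C =', list_for_cs[index_best_logloss])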
Decision Tree
Hyperparameter tuning:
For the Decision Tree classifier, we can seek to optimize the following classifier parameters: min_samples_leaf (the minimum number of samples required to be at a leaf node), max_depth
Based on our reading, setting min_samples_leaf to approximately 1% of the data points can keep the tree from fitting leaves to individual outliers, which can help to improve accuracy (it is unclear whether this significantly improves multi-class log loss).
Model calibration:
See above
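A minimal tuning sketch for the Decision Tree over min_samples_leaf and max_depth (the grid values are illustrative assumptions):
from sklearn.tree import DecisionTreeClassifier

list_for_leafs = []
list_for_depths = []
list_for_log_loss = []

def DT_tuned(leaf, depth):
    dt = DecisionTreeClassifier(min_samples_leaf = leaf, max_depth = depth).fit(mini_train_data, mini_train_labels)
    dt_prediction_probabilities = dt.predict_proba(mini_dev_data)
    list_for_leafs.append(leaf)
    list_for_depths.append(depth)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = dt_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    print("Multi-class Log Loss with DT and min_samples_leaf, max_depth =", leaf, ",", depth, "is:", working_log_loss)

leaf_tuning = [1, 10, 100, 1000]
depth_tuning = [5, 10, 20, None]
for this_leaf in leaf_tuning:
    for this_depth in depth_tuning:
        DT_tuned(this_leaf, this_depth)

index_best_logloss = np.argmin(list_for_log_loss)
print('For DT the best log loss with hyperparameter tuning is', list_for_log_loss[index_best_logloss], 'with min_samples_leaf =', list_for_leafs[index_best_logloss], 'max_depth =', list_for_depths[index_best_logloss])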
Support Vector Machines
Hyperparameter tuning:
For the SVM classifier, we can seek to optimize the following classifier parameters: C (penalty parameter C of the error term), kernel ('linear', 'poly', 'rbf', sigmoid', or 'precomputed')
See source [2] for parameter optimization in SVM
Model calibration:
See above
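A minimal tuning sketch for the SVM over C and kernel; note that probability=True is required so that predict_proba is available for log loss. The grid values are illustrative assumptions, and 'precomputed' is omitted since it requires a precomputed kernel matrix.
from sklearn.svm import SVC

list_for_cs = []
list_for_kernels = []
list_for_log_loss = []

def SVM_tuned(c, kern):
    # probability=True enables predict_proba (via internal Platt-style calibration)
    svm = SVC(C = c, kernel = kern, probability = True).fit(mini_train_data, mini_train_labels)
    svm_prediction_probabilities = svm.predict_proba(mini_dev_data)
    list_for_cs.append(c)
    list_for_kernels.append(kern)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = svm_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    print("Multi-class Log Loss with SVM and C, kernel =", c, ",", kern, "is:", working_log_loss)

c_tuning = [0.1, 1.0, 10.0]
kernel_tuning = ['linear', 'rbf']
for this_c in c_tuning:
    for this_kern in kernel_tuning:
        SVM_tuned(this_c, this_kern)

index_best_logloss = np.argmin(list_for_log_loss)
print('For SVM the best log loss with hyperparameter tuning is', list_for_log_loss[index_best_logloss], 'with C =', list_for_cs[index_best_logloss], 'kernel =', list_for_kernels[index_best_logloss])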
Neural Nets
Hyperparameter tuning:
For the Neural Networks MLP classifier, we can seek to optimize the following classifier parameters: hidden_layer_sizes, activation ('identity', 'logistic', 'tanh', 'relu'), solver ('lbfgs','sgd', adam'), alpha, learning_rate ('constant', 'invscaling','adaptive')
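The cell above builds a two-hidden-layer network directly in Theano; as a hedged alternative, here is a minimal MLPClassifier sketch covering the parameters listed (the layer sizes and grid values are assumptions, not final settings):
from sklearn.neural_network import MLPClassifier

list_for_hls = []
list_for_acts = []
list_for_log_loss = []

def MLP_tuned(hls, act):
    mlp = MLPClassifier(hidden_layer_sizes = hls, activation = act, solver = 'adam', alpha = 0.0001).fit(mini_train_data, mini_train_labels)
    mlp_prediction_probabilities = mlp.predict_proba(mini_dev_data)
    list_for_hls.append(hls)
    list_for_acts.append(act)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = mlp_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    print("Multi-class Log Loss with MLP and hidden_layer_sizes, activation =", hls, ",", act, "is:", working_log_loss)

hidden_layer_tuning = [(50,), (100,), (50, 30)]
activation_tuning = ['relu', 'tanh']
for this_hls in hidden_layer_tuning:
    for this_act in activation_tuning:
        MLP_tuned(this_hls, this_act)

index_best_logloss = np.argmin(list_for_log_loss)
print('For MLP the best log loss with hyperparameter tuning is', list_for_log_loss[index_best_logloss], 'with hidden_layer_sizes =', list_for_hls[index_best_logloss], 'activation =', list_for_acts[index_best_logloss])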
End of explanation
# Here we will likely use Pipeline and GridSearchCV in order to find the overall classifier with optimized Multi-class Log Loss.
# This will be the last step after all attempts at feature addition, hyperparameter tuning, and calibration are completed
# and the corresponding performance metrics are gathered.
Explanation: Model calibration:
See above
Random Forest
Hyperparameter tuning:
For the Random Forest classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of trees in the forest), max_features, max_depth, min_samples_leaf, bootstrap (whether or not bootstrap samples are used when building trees), oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)
Model calibration:
See above
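A minimal tuning sketch for the Random Forest over n_estimators and min_samples_leaf (the grid values are illustrative assumptions; max_features, max_depth, bootstrap and oob_score could be added to the grid in the same way):
from sklearn.ensemble import RandomForestClassifier

list_for_ns = []
list_for_leafs = []
list_for_log_loss = []

def RF_tuned(n, leaf):
    rf_clf = RandomForestClassifier(n_estimators = n, min_samples_leaf = leaf, bootstrap = True).fit(mini_train_data, mini_train_labels)
    rf_prediction_probabilities = rf_clf.predict_proba(mini_dev_data)
    list_for_ns.append(n)
    list_for_leafs.append(leaf)
    working_log_loss = log_loss(y_true = mini_dev_labels, y_pred = rf_prediction_probabilities, labels = crime_labels_mini_dev)
    list_for_log_loss.append(working_log_loss)
    print("Multi-class Log Loss with RF and n_estimators, min_samples_leaf =", n, ",", leaf, "is:", working_log_loss)

n_estimators_tuning = [10, 50, 100]
min_samples_leaf_tuning = [1, 10, 100]
for this_n in n_estimators_tuning:
    for this_leaf in min_samples_leaf_tuning:
        RF_tuned(this_n, this_leaf)

index_best_logloss = np.argmin(list_for_log_loss)
print('For RF the best log loss with hyperparameter tuning is', list_for_log_loss[index_best_logloss], 'with n_estimators =', list_for_ns[index_best_logloss], 'min_samples_leaf =', list_for_leafs[index_best_logloss])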
Meta-estimators
AdaBoost Classifier
Hyperparameter tuning:
There are no major changes that we seek to make in the AdaBoostClassifier with respect to default parameter values.
Adaboosting each classifier:
We will run the AdaBoostClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
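A minimal AdaBoost sketch wrapping a decision tree base estimator (the depth, n_estimators and learning_rate values are assumptions; note that base estimators must support sample weights, so not every classifier above can be boosted this way):
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Note: in newer scikit-learn releases the keyword is 'estimator' rather than 'base_estimator'.
ada = AdaBoostClassifier(base_estimator = DecisionTreeClassifier(max_depth = 5),
                         n_estimators = 100, learning_rate = 1.0)
ada.fit(mini_train_data, mini_train_labels)
ada_prediction_probabilities = ada.predict_proba(mini_dev_data)
print("Multi-class Log Loss with AdaBoosted decision tree is:",
      log_loss(y_true = mini_dev_labels, y_pred = ada_prediction_probabilities, labels = crime_labels_mini_dev))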
Bagging Classifier
Hyperparameter tuning:
For the Bagging meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of base estimators in the ensemble), max_samples, max_features, bootstrap (whether or not bootstrap samples are used when training base estimators), bootstrap_features (whether features are drawn with replacement), and oob_score (whether or not out-of-bag samples are used to estimate the generalization accuracy)
Bagging each classifier:
We will run the BaggingClassifier on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
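A minimal Bagging sketch wrapping a KNN base estimator (the n_neighbors value and the sampling fractions are assumptions):
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier

# Note: in newer scikit-learn releases the keyword is 'estimator' rather than 'base_estimator'.
bag = BaggingClassifier(base_estimator = KNeighborsClassifier(n_neighbors = 101),
                        n_estimators = 20, max_samples = 0.5, max_features = 0.5,
                        bootstrap = True, bootstrap_features = False, oob_score = True)
bag.fit(mini_train_data, mini_train_labels)
bag_prediction_probabilities = bag.predict_proba(mini_dev_data)
print("Multi-class Log Loss with bagged KNN is:",
      log_loss(y_true = mini_dev_labels, y_pred = bag_prediction_probabilities, labels = crime_labels_mini_dev))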
Gradient Boosting Classifier
Hyperparameter tuning:
For the Gradient Boosting meta classifier, we can seek to optimize the following classifier parameters: n_estimators (the number of boosting stages, i.e. trees), max_depth, min_samples_leaf, and max_features
Gradient Boosting each classifier:
We will run the GradientBoostingClassifier with loss = 'deviance' (as loss = 'exponential' uses the AdaBoost algorithm) on each different classifier from above, using the classifier settings with optimized Multi-class Log Loss after hyperparameter tuning and calibration.
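A minimal Gradient Boosting sketch with the parameters listed above (the values are assumptions); note that GradientBoostingClassifier grows its own ensemble of regression trees rather than wrapping an arbitrary base classifier:
from sklearn.ensemble import GradientBoostingClassifier

# loss='deviance' is the logistic loss; in newer scikit-learn releases this loss is named 'log_loss'.
gb = GradientBoostingClassifier(loss = 'deviance', n_estimators = 100, max_depth = 3,
                                min_samples_leaf = 10, max_features = 'sqrt')
gb.fit(mini_train_data, mini_train_labels)
gb_prediction_probabilities = gb.predict_proba(mini_dev_data)
print("Multi-class Log Loss with Gradient Boosting is:",
      log_loss(y_true = mini_dev_labels, y_pred = gb_prediction_probabilities, labels = crime_labels_mini_dev))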
Final evaluation on test data
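A minimal sketch of what this final step could look like with Pipeline and GridSearchCV; the estimator and grid below are placeholders rather than our final tuned settings:
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Illustrative only: swap in the winning classifier and its tuned grid once all results are gathered.
pipeline = Pipeline([('clf', RandomForestClassifier())])
param_grid = {'clf__n_estimators': [50, 100], 'clf__min_samples_leaf': [1, 10]}

grid_search = GridSearchCV(pipeline, param_grid = param_grid, scoring = 'neg_log_loss', cv = 3)
grid_search.fit(train_data, train_labels)
print("Best parameters:", grid_search.best_params_)
print("Best CV multi-class log loss:", -grid_search.best_score_)
# The winning model would then be evaluated once on the held-out test set for our final evaluation.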
End of explanation |
13,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Chapter 13
Copyright 2017 Allen Downey
License
Step4: Code from previous chapters
make_system, plot_results, and calc_total_infected are unchanged.
Step6: Exercise | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Chapter 13
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
def make_system(beta, gamma):
Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
init = State(S=89, I=1, R=0)
init /= np.sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def plot_results(S, I, R):
Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
def calc_total_infected(results):
Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
return get_first_value(results.S) - get_last_value(results.S)
Explanation: Code from previous chapters
make_system, plot_results, and calc_total_infected are unchanged.
End of explanation
# Solution
def slope_func(state, t, system):
Update the SIR model.
state: State (s, i, r)
t: time
system: System object
returns: State (sir)
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
dSdt = -infected
dIdt = infected - recovered
dRdt = recovered
return dSdt, dIdt, dRdt
system = make_system(0.333, 0.25)
slope_func(system.init, 0, system)
results, details = run_ode_solver(system, slope_func, max_step=3)
details
plot_results(results.S, results.I, results.R)
Explanation: Exercise: Write a slope function for the SIR model and test it with run_ode_solver.
End of explanation |
13,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Key Requirements for the iRF scikit-learn implementation
The following is a documentation of the main requirements for the iRF implementation
Typical Setup
Import the required dependencies
In particular irf_utils and irf_jupyter_utils
Step1: Step 1
Step2: Check out the data
Step3: Step 2
Step4: STEP 3
Step5: Perform Manual CHECKS on the irf_utils
These should be converted to unit tests and checked with nosetests -v test_irf_utils.py
Step 4
Step6: Plot Ranked Feature Importances
Step7: Decision Tree 0 (First) - Get output
Check the output against the decision tree graph
Step8: Compare to our dict of extracted data from the tree
Step9: Check output against the diagram
Step11: Wrapper function for iRF
Step12: Run the iRF Function
For bootstrap - just pick up 20% of the training dataset at a time
Step13: Run iRF for just 1 iteration - should be the uniform sampling version
Step14: Compare to the original single fitted random forest (top of the notebook)!
Step15: These look like they match as required!
Step16: | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
import numpy as np
from functools import reduce
# Needed for the scikit-learn wrapper function
from sklearn.utils import resample
from sklearn.ensemble import RandomForestClassifier
from math import ceil
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
from utils import irf_utils
reload(irf_jupyter_utils)
reload(irf_utils)
Explanation: Key Requirements for the iRF scikit-learn implementation
The following is a documentation of the main requirements for the iRF implementation
Typical Setup
Import the required dependencies
In particular irf_utils and irf_jupyter_utils
End of explanation
load_breast_cancer = load_breast_cancer()
X_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=10,
feature_weight=None)
Explanation: Step 1: Fit the Initial Random Forest
Just fit every feature with equal weights per the usual random forest code, e.g. RandomForestClassifier in scikit-learn
End of explanation
print("Training feature dimensions", X_train.shape, sep = ":\n")
print("\n")
print("Training outcome dimensions", y_train.shape, sep = ":\n")
print("\n")
print("Test feature dimensions", X_test.shape, sep = ":\n")
print("\n")
print("Test outcome dimensions", y_test.shape, sep = ":\n")
print("\n")
print("first 5 rows of the training set features", X_train[:2], sep = ":\n")
print("\n")
print("first 5 rows of the training set outcomes", y_train[:2], sep = ":\n")
Explanation: Check out the data
End of explanation
all_rf_tree_data = irf_utils.get_rf_tree_data(rf=rf,
X_train=X_train, y_train=y_train,
X_test=X_test, y_test=y_test)
#all_rf_tree_data
rf.feature_importances_
Explanation: Step 2: Get all Random Forest and Decision Tree Data
Extract in a single dictionary the random forest data and for all of it's decision trees
This is as required for RIT purposes
End of explanation
all_rit_tree_data = irf_utils.get_rit_tree_data(
all_rf_tree_data=all_rf_tree_data,
bin_class_type=1,
random_state=12,
M=100,
max_depth=2,
noisy_split=False,
num_splits=2)
#for i in range(100):
# print(all_rit_tree_data['rit{}'.format(i)]['rit_leaf_node_union_value'])
Explanation: STEP 3: Get the RIT data and produce RITs
End of explanation
# Print the feature ranking
print("Feature ranking:")
feature_importances_rank_idx = all_rf_tree_data['feature_importances_rank_idx']
feature_importances = all_rf_tree_data['feature_importances']
for f in range(X_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1
, feature_importances_rank_idx[f]
, feature_importances[feature_importances_rank_idx[f]]))
Explanation: Perform Manual CHECKS on the irf_utils
These should be converted to unit tests and checked with nosetests -v test_irf_utils.py
Step 4: Plot some Data
List Ranked Feature Importances
End of explanation
# Plot the feature importances of the forest
feature_importances_std = all_rf_tree_data['feature_importances_std']
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1])
, feature_importances[feature_importances_rank_idx]
, color="r"
, yerr = feature_importances_std[feature_importances_rank_idx], align="center")
plt.xticks(range(X_train.shape[1]), feature_importances_rank_idx)
plt.xlim([-1, X_train.shape[1]])
plt.show()
Explanation: Plot Ranked Feature Importances
End of explanation
# Now plot the trees individually
#irf_jupyter_utils.draw_tree(decision_tree = all_rf_tree_data['rf_obj'].estimators_[0])
Explanation: Decision Tree 0 (First) - Get output
Check the output against the decision tree graph
End of explanation
#irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0'])
# Count the number of samples passing through the leaf nodes
sum(all_rf_tree_data['dtree0']['tot_leaf_node_values'])
Explanation: Compare to our dict of extracted data from the tree
End of explanation
#irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0']['all_leaf_paths_features'])
Explanation: Check output against the diagram
End of explanation
def run_RIT(X_train,
X_test,
y_train,
y_test,
K,
n_estimators,
B,
random_state_classifier=2018,
propn_n_samples=0.2,
bin_class_type=1,
random_state=12,
M=4,
max_depth=2,
noisy_split=False,
num_splits=2):
This function will allow us to run the RIT
for the given parameters
# Set the random state for reproducibility
np.random.seed(random_state_classifier)
# Convert the bootstrap resampling proportion to the number
# of rows to resample from the training data
n_samples = ceil(propn_n_samples * X_train.shape[0])
# Initialize dictionary of rf weights
# CHECK: change this name to be `all_rf_weights_output`
all_rf_weights = {}
# Initialize dictionary of bootstrap rf output
all_rf_bootstrap_output = {}
# Initialize dictionary of bootstrap RIT output
all_rit_bootstrap_output = {}
for k in range(K):
if k == 0:
# Initially feature weights are None
feature_importances = None
# Update the dictionary of all our RF weights
all_rf_weights["rf_weight{}".format(k)] = feature_importances
# fit RF feature weights i.e. initially None
rf = RandomForestClassifier(n_estimators=n_estimators)
# fit the classifier
rf.fit(
X=X_train,
y=y_train,
feature_weight=all_rf_weights["rf_weight{}".format(k)])
# Update feature weights using the
# new feature importance score
feature_importances = rf.feature_importances_
# Load the weights for the next iteration
all_rf_weights["rf_weight{}".format(k + 1)] = feature_importances
else:
# fit weighted RF
# Use the weights from the previous iteration
rf = RandomForestClassifier(n_estimators=n_estimators)
# fit the classifier
rf.fit(
X=X_train,
y=y_train,
feature_weight=all_rf_weights["rf_weight{}".format(k)])
# Update feature weights using the
# new feature importance score
feature_importances = rf.feature_importances_
# Load the weights for the next iteration
all_rf_weights["rf_weight{}".format(k + 1)] = feature_importances
# Run the RITs
for b in range(B):
# Take a bootstrap sample from the training data
# based on the specified user proportion
X_train_rsmpl, y_rsmpl = resample(
X_train, y_train, n_samples=n_samples)
# Set up the weighted random forest
# Using the weight from the (K-1)th iteration i.e. RF(w(K))
rf_bootstrap = RandomForestClassifier(
#CHECK: different number of trees to fit for bootstrap samples
n_estimators=n_estimators)
# Fit RF(w(K)) on the bootstrapped dataset
rf_bootstrap.fit(
X=X_train_rsmpl,
y=y_rsmpl,
feature_weight=all_rf_weights["rf_weight{}".format(K - 1)])
# All RF tree data
# CHECK: why do we need y_train here?
all_rf_tree_data = irf_utils.get_rf_tree_data(
rf=rf_bootstrap,
X_train=X_train_rsmpl,
y_train=y_rsmpl,
X_test=X_test,
y_test=y_test)
# Update the rf bootstrap output dictionary
all_rf_bootstrap_output['rf_bootstrap{}'.format(b)] = all_rf_tree_data
# Run RIT on the interaction rule set
# CHECK - each of these variables needs to be passed into
# the main run_RIT function
all_rit_tree_data = irf_utils.get_rit_tree_data(
all_rf_tree_data=all_rf_tree_data,
bin_class_type=1,
random_state=12,
M=4,
max_depth=2,
noisy_split=False,
num_splits=2)
# Update the rf bootstrap output dictionary
# We will reference the RIT for a particular rf bootstrap
# using the specific bootstrap id - consistent with the
# rf bootstrap output data
all_rit_bootstrap_output['rf_bootstrap{}'.format(b)] = all_rit_tree_data
return all_rf_weights, all_rf_bootstrap_output, all_rit_bootstrap_output
all_rf_weights, all_rf_bootstrap_output, all_rit_bootstrap_output =\
run_RIT(X_train=X_train,
X_test=X_test,
y_train=y_train,
y_test=y_test,
K=5,
n_estimators=20,
B=3,
random_state_classifier=2018,
propn_n_samples=0.2,
bin_class_type=1,
random_state=12,
M=4,
max_depth=2,
noisy_split=False,
num_splits=2)
all_rf_weights
Explanation: Wrapper function for iRF
End of explanation
all_rf_weights, all_rf_bootstrap_output, all_rit_bootstrap_output = run_RIT(
X_train=X_train,
X_test=X_test,
y_train=y_train,
y_test=y_test,
K=5,
n_estimators=20,
B=3,
random_state_classifier=2018,
propn_n_samples=0.2)
all_rit_bootstrap_output
Explanation: Run the iRF Function
For bootstrap - just pick up 20% of the training dataset at a time
End of explanation
all_rf_weights, all_rf_bootstrap_output, all_rit_bootstrap_output = run_RIT(
X_train=X_train,
X_test=X_test,
y_train=y_train,
y_test=y_test,
K=1,
n_estimators=1000,
B=3,
random_state_classifier=2018,
propn_n_samples=0.2)
print(np.ndarray.tolist(all_rf_weights['rf_weight1']))
Explanation: Run iRF for just 1 iteration - should be the uniform sampling version
End of explanation
rf.feature_importances_
Explanation: Compare to the original single fitted random forest (top of the notebook)!
End of explanation
rf_weight1 = np.ndarray.tolist(all_rf_weights['rf_weight1'])
rf_weight1
Explanation: These look like they match as required!
End of explanation
sorted([i for i, e in enumerate(rf_weight1) if e != 0])
Explanation:
End of explanation |
13,470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Step1: Air Quality Dataset
Regression machine learning task
From UCI repository
Step2: Insurance Dataset
Classification machine learning problem.
Formatted data from Kaggle | Python Code:
from feature_selector import FeatureSelector
import pandas as pd
Explanation: Introduction: Testing Feature Selector
In this notebook we will test the feature selector using two additional datasets. We will try out many of the FeatureSelector methods on these standard machine learning sets to make sure that it is a minimum working product.
https://github.com/WillKoehrsen/feature-selector
End of explanation
air_quality = pd.read_csv('data/AirQualityUCI.csv')
air_quality['Date'] = pd.to_datetime(air_quality['Date'])
air_quality['Date'] = (air_quality['Date'] - air_quality['Date'].min()).dt.total_seconds()
air_quality['Time'] = [int(x[:2]) for x in air_quality['Time']]
air_quality.head()
labels = air_quality['PT08.S5(O3)']
air_quality = air_quality.drop(columns = 'PT08.S5(O3)')
fs = FeatureSelector(data = air_quality, labels = labels)
fs.identify_all(selection_params = {'missing_threshold': 0.5, 'correlation_threshold': 0.7,
'task': 'regression', 'eval_metric': 'l2',
'cumulative_importance': 0.9})
fs.plot_collinear()
fs.plot_missing()
fs.plot_feature_importances(threshold = 0.9)
fs.data_all.head()
air_quality_removed = fs.remove(methods = 'all', keep_one_hot=False)
fs.ops
fs.plot_collinear(plot_all=True)
Explanation: Air Quality Dataset
Regression machine learning task
From UCI repository: https://archive.ics.uci.edu/ml/datasets/Air+Quality#
End of explanation
insurance = pd.read_csv('data/caravan-insurance-challenge.csv')
insurance = insurance[insurance['ORIGIN'] == 'train']
labels = insurance['CARAVAN']
insurance = insurance.drop(columns = ['ORIGIN', 'CARAVAN'])
insurance.head()
fs = FeatureSelector(data = insurance, labels = labels)
fs.identify_all(selection_params = {'missing_threshold': 0.8, 'correlation_threshold': 0.85,
'task': 'classification', 'eval_metric': 'auc',
'cumulative_importance': 0.8})
fs.plot_feature_importances(threshold=0.8)
fs.plot_collinear()
insurance_missing_zero = fs.remove(methods = ['missing', 'zero_importance'])
to_remove = fs.check_removal()
fs.feature_importances.head()
insurance_removed = fs.remove(methods = 'all', keep_one_hot=False)
Explanation: Insurance Dataset
Classification machine learning problem.
Formatted data from Kaggle: https://www.kaggle.com/uciml/caravan-insurance-challenge/data
Originally from UCI machine learning repository: https://archive.ics.uci.edu/ml/datasets/Insurance+Company+Benchmark+%28COIL+2000%29
End of explanation |
13,471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Expectation Maximization with Mixtures
Implementation of a mixture model using the t distribution.
Source
D. Peel, G. J. McLachlan; Robust mixture modelling using the t distribution. Statistics and Computing (2000) 10, 339-348.
Generating a sample
I'll generate two samples with distinct parameters and merge them into one.
Step1: Plotting the sample with actual parameters
Step2: Estimating parameters | Python Code:
import time
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import inv
# NOTE: multivariate_t_rvs, multivariate_t, get_random and m_step are helper functions/classes
# assumed to be defined elsewhere in this project (not shown here).

actual_mu01 = [0,0]
actual_cov01 = [[1,0], [0,1]]
actual_df01 = 15
actual_mu02 = [1,1]
actual_cov02 = [[.5, 0], [0, 1.5]]
actual_df02 = 15
size = 300
x01 = multivariate_t_rvs(m=actual_mu01, S=actual_cov01, df=actual_df01, n=size)
x02 = multivariate_t_rvs(m=actual_mu02, S=actual_cov02, df=actual_df02, n=size)
X = np.concatenate([x01, x02])
X.shape
Explanation: Expectation Maximization with Mixtures
Implementation of a mixture model using the t distribution.
Source
D. Peel, G. J. McLachlan; Robust mixture modelling using the t distribution. Statistics and Computing (2000) 10, 339-348.
Generating a sample
I'll generate two samples with distinct parameters and merge them into one.
End of explanation
x, y = np.mgrid[-3:3:.1, -3:4:.1]
xy = np.column_stack([x.ravel(),y.ravel()])
xy.shape
t01 = multivariate_t(actual_mu01, actual_cov01, actual_df01)
t02 = multivariate_t(actual_mu02, actual_cov02, actual_df02)
z01 = []
z02 = []
for _ in xy:
z01.append(t01.pdf(_.reshape(1, -1)))
z02.append(t02.pdf(_.reshape(1, -1)))
z01 = np.reshape(z01, x.shape)
z02 = np.reshape(z02, x.shape)
fig = plt.figure(figsize=(14, 5))
plt.subplot(121)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z01)
plt.contour(x, y, z02)
plt.subplot(122)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z01+z02)
fig.savefig('draft04 - actual.png')
plt.show()
Explanation: Plotting the sample with actual parameters
End of explanation
n_iter = 50 # number of iterations
# guessing mixture 01
mu01 = get_random(X)
cov01 = np.cov(X.T.copy())
# known variables mix01
df01 = 15
p01 = 2
# guessing mixture 02
mu02 = get_random(X)
cov02 = np.cov(X.T.copy())
# known variables mix 02
df02 = 15
p02 = 2
# guessing the pi parameter
pi = .5
t01 = multivariate_t(mu01, cov01, df01)
t02 = multivariate_t(mu02, cov02, df02)
start = time.time()
for i in range(n_iter):
# E-step: Calculating tau
wp1 = t01.pdf(X) * pi
wp2 = t02.pdf(X) * (1 - pi)
wp_total = wp1 + wp2
wp1 /= wp_total; wp1 = wp1.reshape(-1, 1)
wp2 /= wp_total; wp2 = wp2.reshape(-1, 1)
# E-Step: Calculating u
u01 = []
for delta in X-mu01:
u01.append(delta.dot(inv(cov01)).dot(delta))
u01 = np.array(u01)
u01 = (df01 + p01)/(df01 + u01); u01 = u01.reshape(-1, 1)
u02 = []
for delta in X-mu02:
u02.append(delta.dot(inv(cov02)).dot(delta))
u02 = np.array(u02)
u02 = (df02 + p02)/(df02 + u02); u02 = u02.reshape(-1, 1)
# M-step
mu01, cov01 = m_step(X, mu01, cov01, u01, wp1)
mu02, cov02 = m_step(X, mu02, cov02, u02, wp2)
t01.mu = mu01; t01.sigma = cov01
t02.mu = mu02; t02.sigma = cov02
pi = wp1.sum()/len(wp1)
print 'elapsed time: %s' % (time.time() - start)
print 'pi: {0:4.06}'.format(pi)
print 'mu01: {0}; mu02: {1}'.format(mu01, mu02)
print 'cov01\n%s' % cov01
print 'cov02\n%s' % cov02
xmin, xmax = min(X.T[0]), max(X.T[0])
ymin, ymax = min(X.T[1]), max(X.T[1])
x, y = np.mgrid[xmin:xmax:.1, ymin:ymax:.1]
xy = np.column_stack([x.ravel(),y.ravel()])
xy.shape
t01 = multivariate_t(mu01, cov01, df01)
t02 = multivariate_t(mu02, cov02, df02)
z01 = []
z02 = []
z03 = []
for _ in xy:
_ = _.reshape(1, -1)
z01.append(t01.pdf(_))
z02.append(t02.pdf(_))
z03.append(pi*t01.pdf(_) + (1-pi)*t02.pdf(_))
z01 = np.reshape(z01, x.shape)
z02 = np.reshape(z02, x.shape)
z03 = np.reshape(z03, x.shape)
fig = plt.figure(figsize=(14, 5))
plt.subplot(121)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z01, cmap='ocean')
plt.contour(x, y, z02, cmap='hot')
plt.subplot(122)
plt.scatter(X.T[0], X.T[1], s=10, alpha=.5)
plt.contour(x, y, z03)
fig.savefig('draft04 - estimated.png')
plt.show()
Explanation: Estimating parameters
End of explanation |
13,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
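As a purely illustrative aside (the value chosen below is an assumption for demonstration, not a statement about any particular model), a completed ENUM cell of the kind above would look like this:

```python
# Hypothetical example only -- "C" (concentrations) is an arbitrary illustrative choice.
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
DOC.set_value("C")
```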
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
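For completeness, a BOOLEAN property such as the one above is filled with a bare True/False value (the choice shown is arbitrary, for illustration only):

```python
# Hypothetical example only -- the False here is an arbitrary illustrative choice.
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
DOC.set_value(False)
```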
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
13,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming in Python
http://mumin.pl/Skrypt_A_do_Z/
Step1: Features of Python
In Python, values carry the type, not variables, so Python is a dynamically typed language.
All values are passed by reference.
Automatic conversion is defined for numeric types, so it is possible, for example, to multiply a complex number by a long integer without casting.
Elements of Python syntax - literals
A literal is a notation for constant values of certain built-in types.
Examples
Step2: Elements of Python syntax
Types
bool logical (Boolean) type
True, False
int integer
1, 13
float floating-point number
3.1415
complex complex number
1 + 3j
str string (immutable)
list list (mutable contents and length)
[2, "Ala", -12.32]
tuple tuple (immutable)
(2, "Ala", -12.32)
set set (mutable)
set([2, "Ala", -12.32])
dict dictionary (associative array) (mutable)
{1: "jeden", "dwa": 2}
Step3: The integer type int can be arbitrarily large in Python 3.
Control flow statements
the if conditional statement
Example
Step4: Help
This command only works in Jupyter
Step5: and the following command is a Python built-in
Step6: Lists
A few functions that are useful for working with lists.
len(lista) - returns the number of elements of the list
append(x) - adds element x at the end of the list
insert(i,x) - inserts element x into the list at index i
remove(x) - removes the first occurrence of element x from the list. If there is no element with value x in the list, Sage will raise an error.
pop(i) - removes the element at index i from the list, reducing its size by 1. Calling pop() without an index removes the last element of the list.
count(x) - returns the number of occurrences of x in the list
sort() - sorts the elements of the list in ascending order
Step7: Iterating over a list
Step8: Since we can automatically generate a list of integers with an arbitrary step (see help(range))
Step9: we can create loops known from C, Fortran, etc. in the following way
Step10: List comprehensions
It is worth looking at
Step11: The same effect can be obtained using a for loop in the classic way
Step12: Loops can be nested
Step13: The Map-Reduce programming model
http | Python Code:
import this
Explanation: Programming in Python
http://mumin.pl/Skrypt_A_do_Z/
https://docs.python.org/3/tutorial/
http://books.icse.us.edu.pl/
Python
<img src="http://upload.wikimedia.org/wikipedia/commons/thumb/6/66/Guido_van_Rossum_OSCON_2006.jpg/320px-Guido_van_Rossum_OSCON_2006.jpg"
align='right'>
Python is a high-level programming language, created in the 1980s by Guido van Rossum.
Its guiding idea is conciseness and readability of source code.
Python supports the following programming paradigms:
- imperative
- object-oriented
- functional
History of Python <img src="http://upload.wikimedia.org/wikipedia/en/2/25/PythonProgLogo.png" align='right'>
Python 1.0 - January 1994
Python 2.0 - October 16, 2000
Python 2.7 - July 3, 2010
Python 3.0 - December 3, 2008
Python 3.4 - March 16, 2014
<img src="http://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/320px-Python_logo_and_wordmark.svg.png" align="left">
End of explanation
print("Spróbujmy sami!")
Explanation: Features of Python
In Python, values carry the type, not variables, so Python is a dynamically typed language.
All values are passed by reference.
Automatic conversion is defined for numeric types, so it is possible, for example, to multiply a complex number by a long integer without casting.
Elements of Python syntax - literals
A literal is a notation for constant values of certain built-in types.
Examples:
python
1
1.2
"bleble"
Elements of Python syntax - keywords
<table class="table table-bordered">
<tr><td>and</td><td>exec</td><td>not</td></tr>
<tr><td>assert</td><td>finally</td><td>or</td></tr>
<tr><td>break</td><td>for</td><td>pass</td></tr>
<tr><td>class</td><td>from</td><td>print</td></tr>
<tr><td>continue</td><td>global</td><td>raise</td></tr>
<tr><td>def</td><td>if</td><td>return</td></tr>
<tr><td>del</td><td>import</td><td>try</td></tr>
<tr><td>elif</td><td>in</td><td>while</td></tr>
<tr><td>else</td><td>is</td><td>with </td></tr>
<tr><td>except</td><td>lambda</td><td>yield</td></tr>
</table>
Elements of Python syntax - operators
The following tokens are operators:
<P>
<div class="verbatim"><pre>
+ - * ** / // %
<< >> & | ^ ~
< > <= >= == != <>
</pre></div>
## Elements of Python syntax - delimiters
The following tokens are delimiters:
<div class="highlight-none"><div class="highlight"><pre><span></span>( ) [ ] { } @
, : . ` = ;
+= -= *= /= //= %=
&= |= ^= >>= <<= **=
</pre></div>
## Elements of Python syntax - names (identifiers)
Examples:
```python
a = 12
slowo = "coś dobrego"
```
## Elements of Python syntax - comments
The following line is a comment:
```python
# this is a comment
# and so is this
```
Note: in a notebook we can naturally add commentary to the code using text cells.
End of explanation
type(1)
Explanation: Elements of Python syntax
Types
bool logical (Boolean) type
True, False
int integer
1, 13
float floating-point number
3.1415
complex complex number
1 + 3j
str string (immutable)
list list (mutable contents and length)
[2, "Ala", -12.32]
tuple tuple (immutable)
(2, "Ala", -12.32)
set set (mutable)
set([2, "Ala", -12.32])
dict dictionary (associative array) (mutable)
{1: "jeden", "dwa": 2}
type(None) the equivalent of null
None
End of explanation
import numpy as np
from numpy import sin,cos
print( sin(2.0))
print( sin(np.array([1,2,3,4])))
from math import sin
dir()
from math import *
dir()
Explanation: The integer type int can be arbitrarily large in Python 3.
Control flow statements
the if conditional statement
Example:
```python
liczba = 12
if liczba % 2 == 0:
    p_czy_np = ''
else:
    p_czy_np = 'nie'
```
Control flow statements
the for loop
Example:
python
for i in [1,2,3,4,5]:
    print(i)
Modules in Python
In Python, external libraries are organized into modules.
For example, the command:
import numpy
loads the numpy module and makes all of its objects available in the numpy namespace.
This can be changed, for example by shortening it to np:
import numpy as np
We can also import individual functions from a given module into the current namespace:
from numpy import sin,cos
or even all of them:
from numpy import *
More details: https://docs.python.org/3.5/tutorial/modules.html
End of explanation
atan2?
Explanation: Help
This command only works in Jupyter:
End of explanation
help(atan2)
Explanation: and the following command is a Python built-in:
End of explanation
l = [1,2,3,4]
l
l[4]
print (l[0:3])
print (l[3])
Explanation: Lists
A few functions that are useful for working with lists.
len(lista) - returns the number of elements of the list
append(x) - adds element x at the end of the list
insert(i,x) - inserts element x into the list at index i
remove(x) - removes the first occurrence of element x from the list. If there is no element with value x in the list, Sage will raise an error.
pop(i) - removes the element at index i from the list, reducing its size by 1. Calling pop() without an index removes the last element of the list.
count(x) - returns the number of occurrences of x in the list
sort() - sorts the elements of the list in ascending order
End of explanation
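To make the list operations listed above concrete, here is a small sketch (the variable names are illustrative):

```python
lista = [3, 1, 4, 1, 5]
print(len(lista))      # 5 -- number of elements
lista.append(9)        # add 9 at the end -> [3, 1, 4, 1, 5, 9]
lista.insert(0, 2)     # insert 2 at index 0 -> [2, 3, 1, 4, 1, 5, 9]
lista.remove(1)        # remove the first occurrence of 1 -> [2, 3, 4, 1, 5, 9]
last = lista.pop()     # remove and return the last element (9)
print(lista.count(1))  # 1 -- occurrences of 1 that remain
lista.sort()           # sort in ascending order -> [1, 2, 3, 4, 5]
print(lista, last)
```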
l = [1,2,33,4]
for el in l:
    print(el)
Explanation: Iterating over a list:
In Python, the classic for loop is a for-each loop, e.g.:
End of explanation
range(10)
Explanation: Since we can automatically generate a list of integers with an arbitrary step (see help(range)):
End of explanation
for el in range(10,20,2):
print (el)
Explanation: we can create loops known from C, Fortran, etc. in the following way:
End of explanation
[a**2 for a in [1,2,3]]
[a**2 for a in [1,2,3] if a<3]
Explanation: List comprehensions
It is worth looking at:
https://docs.python.org/3.5/tutorial/datastructures.html#list-comprehensions
This is a concise notation corresponding to the mathematical one, e.g.:
$$ a^2 \;\mathrm{ for }\; a\in \{1,2,3\}$$
End of explanation
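Two closely related comprehension forms (added here as an extra illustration) follow exactly the same pattern:

```python
squares = {a: a**2 for a in [1, 2, 3]}        # dict comprehension: {1: 1, 2: 4, 3: 9}
evens = {a for a in range(10) if a % 2 == 0}  # set comprehension: {0, 2, 4, 6, 8}
print(squares, evens)
```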
l = [11,22,44]
print(l)
l2 = []
for el in l:
l2.append(el**2)
print(l2)
Explanation: The same effect can be obtained using a for loop in the classic way:
End of explanation
[(i,j) for i in range(4) for j in range(3) if i!=j]
Explanation: Loops can be nested:
End of explanation
def f(x):
return x+2
l = [11,22,44]
map( f, l )
sum( l )
import functools
def prod(x,y):
return x*y
functools.reduce ( prod, l )
def maks(x,y):
return max(x,y)
print( l, functools.reduce ( maks, l ))
max(l)
Explanation: The Map-Reduce programming model
http://en.wikipedia.org/wiki/MapReduce
Python has built-in functions that make it possible to express a problem in terms of map and reduce operations on data stored in a list.
End of explanation |
13,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iris univariate joint probability distribution
Again, it's the Iris dataset (I promise I will unleash some 'real' datasets at some point). I've done a lot of bivariate cluster plots, so I wanted to put together a 1d probability distribution based upon a custom query.
In this instance, it's purely plotting the joint probability distribution of each of the variables given the class, e.g. P(sepal_length|iris_class), P(petal_length|iris_class) ... and so on.
Step1: Create the network, specifying a latent variable.
Step2: And finally, query the model, specifying each variable in a separate query (otherwise the query will return a covariance matrix) | Python Code:
%matplotlib inline
import pandas as pd
import sys
sys.path.append("../../../bayesianpy")
import bayesianpy
from bayesianpy.network import Builder as builder
import logging
import os
import math
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
import seaborn as sns
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
bayesianpy.jni.attach(logger)
db_folder = bayesianpy.utils.get_path_to_parent_dir("")
iris = pd.read_csv(os.path.join(db_folder, "data/iris.csv"), index_col=False)
Explanation: Iris univariate joint probability distribution
Again, it's the Iris dataset (I promise I will unleash some 'real' datasets at some point). I've done a lot of bivariate cluster plots, so I wanted to put together a 1d probability distribution based upon a custom query.
In this instance, it's purely plotting the joint probability distribution of each of the variables given the class, e.g. P(sepal_length|iris_class), P(petal_length|iris_class) ... and so on.
End of explanation
network = bayesianpy.network.create_network()
cluster = builder.create_cluster_variable(network, 4)
node = builder.create_multivariate_continuous_node(network, iris.drop('iris_class',axis=1).columns.tolist(), "joint")
builder.create_link(network, cluster, node)
class_variable = builder.create_discrete_variable(network, iris, 'iris_class', iris['iris_class'].unique())
builder.create_link(network, cluster, class_variable)
Explanation: Create the network, specifying a latent variable.
End of explanation
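Given the structure defined above (the latent cluster is the only parent of both the continuous node and the class variable), the quantity queried for each measurement $x_j$ is a mixture over the latent states:

$$ p(x_j \mid \text{iris\_class}=k) \;=\; \sum_{c} p(x_j \mid \text{cluster}=c)\, p(\text{cluster}=c \mid \text{iris\_class}=k) $$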
head_variables = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
with bayesianpy.data.DataSet(iris, db_folder, logger) as dataset:
model = bayesianpy.model.NetworkModel(network, logger)
model.train(dataset)
queries = [bayesianpy.model.QueryConditionalJointProbability(
head_variables=[v],
tail_variables=['iris_class']) for v in head_variables]
(engine, _, _) = bayesianpy.model.InferenceEngine(network).create()
query = bayesianpy.model.SingleQuery(network, engine, logger)
results = query.query(queries)
jd = bayesianpy.visual.JointDistribution()
fig = plt.figure(figsize=(10,10))
for i, r in enumerate(list(results)):
ax = fig.add_subplot(2, 2, i+1)
jd.plot_distribution_with_variance(ax, iris, queries[i].get_head_variables(), r)
plt.show()
Explanation: And finally, query the model, specifying each variable in a separate query (otherwise the query will return a covariance matrix)
End of explanation |
13,475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Networks
Generative Adversarial Networks
In the adversarial training procedure, two models are trained together: the generative model, G, which estimates the data distribution, and the discriminative model, D, which determines whether a given sample came from the dataset or was artificially generated. G evolves to produce samples that D mistakes, with increasing probability, for samples drawn from the true data distribution.
One nice property of GANs is that the generator is not directly updated with data examples, but by the gradients coming through the discriminator.
Here we will make a conditional generative model p(x|c) by adding some class label c as input to both G and D.
Use this code with no warranty and please respect the accompanying license.
Step1: Network definitions
Step2: Adversarial Training
You can either get the trained models from my google drive or train your own models using the DCGAN.py script.
I will later look into what happens if we make our noise Gaussian (and make some samples more frequent).
Step3: Experiments
Create demo networks and restore weights
Step4: 1) Sample bag of numbers from the generator
Step5: 2) Sample single number from the generator
Step6: 3) Class Sweep
With the same z slowly move the class labels to smoothly generate different numbers
Step7: 3) Z-space Sweep
Vary a coefficient alpha that determines how much of two different Z values are used to sample from the generator.
This is also called interpolation in z space in the original paper. | Python Code:
# Imports
%reload_ext autoreload
%autoreload 1
import os, sys
sys.path.append('../')
sys.path.append('../common')
from tools_general import tf, np
from IPython.display import Image
from tools_train import get_train_params, OneHot, vis_square
import imageio
# define parameters
networktype = 'DCGAN_MNIST'
work_dir = '../trained_models/%s/' %networktype
Explanation: Generative Adversarial Networks
Generative Adversarial Networks
In the adversarial training procedure, two models are trained together: the generative model, G, which estimates the data distribution, and the discriminative model, D, which determines whether a given sample came from the dataset or was artificially generated. G evolves to produce samples that D mistakes, with increasing probability, for samples drawn from the true data distribution.
One nice property of GANs is that the generator is not directly updated with data examples, but by the gradients coming through the discriminator.
Here we will make a conditional generative model p(x|c) by adding some class label c as input to both G and D.
Use this code with no warranty and please respect the accompanying license.
End of explanation
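For reference, the adversarial game described above, in its conditional form with class label $c$, is the standard minimax objective from the GAN literature:

$$ \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x \mid c)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z \mid c) \mid c)\big)\big] $$

Note that the training code below uses the common non-saturating variant for the generator: it maximizes log D(G(z|c)|c) via a cross-entropy against ones rather than minimizing log(1 − D).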
# define networks
from tools_networks import deconv, conv, dense, clipped_crossentropy, dropout
def concat_labels(X, labels):
if X.get_shape().ndims == 4:
X_shape = tf.shape(X)
labels_reshaped = tf.reshape(labels, [-1, 1, 1, 10])
a = tf.ones([X_shape[0], X_shape[1], X_shape[2], 10])
X = tf.concat([X, labels_reshaped * a], axis=3)
return X
def create_gan_G(z, labels, is_training, Cout=1, trainable=True, reuse=False, networktype='ganG'):
'''input : batchsize * 100 and labels to make the generator conditional
output: batchsize * 28 * 28 * 1'''
with tf.variable_scope(networktype, reuse=reuse):
z = tf.concat(axis=-1, values=[z, labels])
Gz = dense(z, is_training, Cout=4 * 4 * 256, act='reLu', norm='batchnorm', name='dense2')
Gz = tf.reshape(Gz, shape=[-1, 4, 4, 256]) # 4
Gz = deconv(Gz, is_training, kernel_w=5, stride=2, Cout=256, trainable=trainable, act='reLu', norm='batchnorm', name='deconv1') # 11
Gz = deconv(Gz, is_training, kernel_w=5, stride=2, Cout=128, trainable=trainable, act='reLu', norm='batchnorm', name='deconv2') # 25
Gz = deconv(Gz, is_training, kernel_w=4, stride=1, Cout=1, act=None, norm=None, name='deconv3') # 28
Gz = tf.nn.sigmoid(Gz)
return Gz
def create_gan_D(xz, labels, is_training, trainable=True, reuse=False, networktype='ganD'):
with tf.variable_scope(networktype, reuse=reuse):
xz = concat_labels(xz, labels)
Dxz = conv(xz, is_training, kernel_w=5, stride=2, Cout=128, trainable=trainable, act='lrelu', norm=None, name='conv1') # 12
Dxz = conv(Dxz, is_training, kernel_w=5, stride=2, Cout=256, trainable=trainable, act='lrelu', norm='batchnorm', name='conv2') # 4
Dxz = conv(Dxz, is_training, kernel_w=2, stride=2, Cout=256, trainable=trainable, act='lrelu', norm='batchnorm', name='conv3') # 2
Dxz = conv(Dxz, is_training, kernel_w=2, stride=2, Cout=1, trainable=trainable, act='lrelu', norm='batchnorm', name='conv4') # 2
Dxz = tf.nn.sigmoid(Dxz)
return Dxz
def create_dcgan_trainer(base_lr=1e-4, networktype='dcgan'):
'''Train a Generative Adversarial Network'''
# with tf.name_scope('train_%s' % networktype):
is_training = tf.placeholder(tf.bool, [], 'is_training')
inZ = tf.placeholder(tf.float32, [None, 100]) # tf.random_uniform(shape=[batch_size, 100], minval=-1., maxval=1., dtype=tf.float32)
inL = tf.placeholder(tf.float32, [None, 10]) # we want to condition the generated out put on some parameters of the input
inX = tf.placeholder(tf.float32, [None, 28, 28, 1])
Gz = create_gan_G(inZ, inL, is_training, Cout=1, trainable=True, reuse=False, networktype=networktype + '_G')
DGz = create_gan_D(Gz, inL, is_training, trainable=True, reuse=False, networktype=networktype + '_D')
Dx = create_gan_D(inX, inL, is_training, trainable=True, reuse=True, networktype=networktype + '_D')
ganG_var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=networktype + '_G')
print(len(ganG_var_list), [var.name for var in ganG_var_list])
ganD_var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=networktype + '_D')
print(len(ganD_var_list), [var.name for var in ganD_var_list])
Gscore = clipped_crossentropy(DGz, tf.ones_like(DGz))
Dscore = clipped_crossentropy(DGz, tf.zeros_like(DGz)) + clipped_crossentropy(Dx, tf.ones_like(Dx))
Gtrain = tf.train.AdamOptimizer(learning_rate=base_lr, beta1=0.5).minimize(Gscore, var_list=ganG_var_list)
Dtrain = tf.train.AdamOptimizer(learning_rate=base_lr, beta1=0.5).minimize(Dscore, var_list=ganD_var_list)
return Gtrain, Dtrain, Gscore, Dscore, is_training, inZ, inX, inL, Gz
Explanation: Network definitions
End of explanation
best_iter = 4280  # visually selected
best_img = work_dir + 'Iter_%d.jpg' %best_iter
best_ganMNIST_model = work_dir + "%.3d_model.ckpt" % best_iter
Image(filename=best_img)
Explanation: Adversarial Training
You can either get the trained models from my google drive or train your own models using the DCGAN.py script.
I will later look into what happens if we make our noise Gaussian (and make some samples more frequent).
End of explanation
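On the Gaussian-noise question raised above: swapping the sampler is a one-line change (a sketch only; the released checkpoint was trained with the uniform noise used throughout this notebook):

```python
import numpy as np

batch_size = 100
# Uniform noise, as used everywhere else in this notebook:
Z_uniform = np.random.uniform(low=-1., high=1., size=[batch_size, 100]).astype(np.float32)
# A Gaussian alternative (an assumption for experimentation; the scale is chosen arbitrarily):
Z_gauss = np.random.normal(loc=0.0, scale=0.5, size=[batch_size, 100]).astype(np.float32)
```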
tf.reset_default_graph()
demo_sess = tf.InteractiveSession()
is_training = tf.placeholder(tf.bool, [], 'is_training')
inZ = tf.placeholder(tf.float32, [None, 100])
inL = tf.placeholder(tf.float32, [None, 10])
inX = tf.placeholder(tf.float32, [None, 28, 28, 1])
Gz = create_gan_G(inZ, inL, is_training, Cout=1, trainable=True, reuse=False, networktype=networktype + '_G')
DGz = create_gan_D(Gz, inL, is_training, trainable=True, reuse=False, networktype=networktype + '_D')
tf.global_variables_initializer().run()
ganG_var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=networktype + '_G')
saver = tf.train.Saver(var_list=ganG_var_list)
saver.restore(demo_sess, best_ganMNIST_model)
Explanation: Experiments
Create demo networks and restore weights
End of explanation
batch_size = 100
desired_number = 0
Z_test = np.random.uniform(size=[batch_size, 100], low=-1., high=1.).astype(np.float32)
labels_test = OneHot(np.repeat([[0,1,2,3,4,5,6,7,8,9]],batch_size//10,axis=0))
#labels_test = OneHot(np.repeat([[0,1,0,1,0,1,0,1,0,1]],batch_size//10,axis=0))
Gz_sample = demo_sess.run(Gz, feed_dict={inZ: Z_test, inL: labels_test, is_training:False})
img = vis_square(Gz_sample, [batch_size//10, 10],save_path=work_dir + 'test.jpg')
Image(filename=work_dir + 'test.jpg')
Explanation: 1) Sample bag of numbers from the generator
End of explanation
num = 3
Z1 = np.random.uniform(size=[1, 100], low=-1., high=1.).astype(np.float32)
L1 = OneHot(np.repeat([[num]],1,axis=1))
Gz_sample = demo_sess.run(Gz, feed_dict={inZ: Z1, inL: L1, is_training:False})
img = vis_square(Gz_sample, [1, 1],save_path=work_dir + 'test.jpg')
Image(filename=work_dir + 'test.jpg')
Explanation: 2) Sample single number from the generator
End of explanation
class_sweep_dir = work_dir+'class_sweep/'
if not os.path.exists(class_sweep_dir): os.makedirs(class_sweep_dir)
batch_size = 100
Z_test = np.random.uniform(size=[batch_size, 100], low=-1., high=1.).astype(np.float32)
labels_test = OneHot(np.repeat([[0,0,0,0,0,0,0,0,0,0]],batch_size//10,axis=0))
images = []
for num in range(9):
for p in np.linspace(1.,0.,30):
count=len(images)
fname = class_sweep_dir+ '%.2d_%d_%.2f.jpg'%(count,num,p)
for i in range(batch_size):
labels_test[i,num] = p; labels_test[i,num+1] = 1-p
Gz_sample = demo_sess.run(Gz, feed_dict={inZ: Z_test, inL: labels_test, is_training:False})
img = vis_square(Gz_sample, [batch_size//10, 10],save_path= fname)
images.append(imageio.imread(fname))
try: os.remove(fname)
except: pass
imageio.mimsave(class_sweep_dir+'class_sweep.gif', images)
#display(Image(url=class_sweep_dir+'class_sweep.gif'))
Image(url='../common/images/cgan_class_sweep.gif')
Explanation: 3) Class Sweep
With the same z slowly move the class labels to smoothly generate different numbers
End of explanation
zspace_sweep_dir = work_dir+'zspace_sweep/'
if not os.path.exists(zspace_sweep_dir): os.makedirs(zspace_sweep_dir)
batch_size = 100
Z1 = np.random.uniform(size=[batch_size, 100], low=-1., high=1.).astype(np.float32)
Z2 = np.random.uniform(size=[batch_size, 100], low=-1., high=1.).astype(np.float32)
images = []
for alpha in np.linspace(1.,0.,50):
count=len(images)
fname = zspace_sweep_dir+'%.2d.jpg'%(count)
Z_test = alpha * Z1 + (1-alpha)*Z2
labels_test = OneHot(np.repeat([[0,1,2,3,4,5,6,7,8,9]],batch_size//10,axis=0))
Gz_sample = demo_sess.run(Gz, feed_dict={inZ: Z_test, inL: labels_test, is_training:False})
img = vis_square(Gz_sample, [batch_size//10, 10],save_path=fname)
images.append(imageio.imread(fname))
try: os.remove(fname)
except: pass
imageio.mimsave(zspace_sweep_dir+'zspace_sweep.gif', images)
#display(Image(url=zspace_sweep_dir+'zspace_sweep.gif'))
Image(url='../common/images/cgan_zspace_sweep.gif')
Explanation: 3) Z-space Sweep
Vary a coefficient alpha that determines how much of two different Z values are used to sample from the generator.
This is also called interpolation in z space in the original paper.
End of explanation |
13,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem 1.
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.
Step1: Problem 2.
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be
Step2: Problem 3.
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
Step3: Problem 4.
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
Step4: Problem 5.
2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
Step5: Problem 6.
The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum. | Python Code:
x = range(1,10)
x
def sum_number_mode(a,b,maxbarrier):
x = 0
for i in range(1,maxbarrier-1):
if i%a == 0 or i%b == 0:
x += i
return x
sum_number_mode(3,5,1001)
Explanation: Problem 1.
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.
End of explanation
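For comparison, the same sum has a closed form via inclusion–exclusion; a small sketch (with n the exclusive upper bound):

```python
def sum_multiples_below(k, n):
    # Sum of k, 2k, ..., below n: k * m * (m + 1) / 2 with m = (n - 1) // k
    m = (n - 1) // k
    return k * m * (m + 1) // 2

def sum_3_or_5_below(n):
    # Inclusion-exclusion: count multiples of 15 once, not twice.
    return sum_multiples_below(3, n) + sum_multiples_below(5, n) - sum_multiples_below(15, n)

print(sum_3_or_5_below(1000))  # 233168, matching the loop above
```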
x = [3,4]
x[1]
def sum_even_fibonacci(mod, maxfibonacci):
x = [1,2]
sum_fib = 0
while x[1] < maxfibonacci:
if x[1]%mod == 0:
sum_fib += x[1]
y = x[1]
x[1] += x[0]
x[0] = y
return sum_fib
sum_even_fibonacci(2,40000000000)
Explanation: Problem 2.
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
End of explanation
def greatest_factor(prime):
x = 2 # initiate first factor
while x != prime:
if prime%x == 0:
result = prime/x
prime = result
else:
x += 1
else:
return prime
greatest_factor(81)
Explanation: Problem 3.
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
End of explanation
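One subtlety in the function above: prime/x is float division in Python 3, so the result comes back as a float. An integer-only sketch of the same idea:

```python
def largest_prime_factor(n):
    # Strip factors from the smallest upward; whatever remains is the largest prime factor.
    f = 2
    while f * f <= n:
        if n % f == 0:
            n //= f   # integer division keeps n an int
        else:
            f += 1
    return n

print(largest_prime_factor(600851475143))  # 6857
```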
import time
start_time = time.time()
def palindrome(number):
return str(number) == str(number)[::-1]
list_number = []
for x in range(800,1000):
for i in range(800,1000):
result = i*x
if palindrome(result) == True:
list_number += [result,]
max(list_number)
print("--- %s seconds ---" % (time.time() - start_time))
start_time = time.time()
list_number = []
for x in range(800,1000):
    for i in range(800,1000):
        result = i*x
        s = list(str(result))
        if (s[0] == s[5]) and (s[1] == s[4]) and (s[2] == s[3]):
            list_number += [result,]
max(list_number)
print("--- %s seconds ---" % (time.time() - start_time))
Explanation: Problem 4.
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
End of explanation
def smallest_divided(start, max_number):
factors = range(start, max_number + 1)
x = (len(factors)) - 1
factors_result = factors[0]
while (factors[x]-1) >= factors[0]:
if factors_result%factors[x-1] !=0 :
factors_result *= factors[x-1]
            print(factors_result)
x -= 1
#return factors_result
smallest_divided(1,20)
def factoring(prime):
x = 2 # initiate first factor
factors = []
result = 0
while x != prime:
while (prime%x == 0):
result = prime/x
prime = result
factors.append(x)
if prime == 1:
break
if prime == 1:
break
else:
x += 1
else:
factors.append(prime)
return factors
factoring(20)
def smallest_divided(start, max_number):
x = range(start, max_number + 1)
factors = []
for i in x:
z = factoring(i)
factors.append(z)
return factors
smallest_divided(1,20)
faktor = smallest_divided(1,20)
set_factor = faktor
def add_set(set_factor):
    # Collect the union of all factors (without duplicates) from a list of factor lists.
    set_all = []
    for factors in set_factor:
        for f in factors:
            if f not in set_all:
                set_all.append(f)
    return set_all
set_all_simulate = []
for i in set_factor:
set_all_simulate += i
print(set_all_simulate)
factor_set = set(set_all_simulate)
factor_set
z = 1
for i in factor_set:
z *= i
print(z)
set_factor
Explanation: Problem 5.
2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
End of explanation
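A more direct route to the same quantity (useful as a cross-check on the factor-collecting exploration above) is to build the least common multiple with gcd:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    # Least common multiple of two integers.
    return a * b // gcd(a, b)

print(reduce(lcm, range(1, 21)))  # 232792560
```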
def sum_of_squares(start, end):
sum_all = 0
for i in range(start, end +1):
sum_all += (i)**2
return sum_all
su_sq = sum_of_squares(1,100)
def square_of_sum(start, end):
sum_all_i = 0
b = 0
for i in range(start, end + 1):
sum_all_i += i
b = sum_all_i * sum_all_i
return b
sq_su = square_of_sum(1,100)
sq_su - su_sq
Explanation: Problem 6.
The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
End of explanation |
13,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Pyro
Michael Zingale, Alice Harpole
Stony Brook University
Why pyro?
Python is a good introductory language—it helps make the way these algorithms work clearer
High level introduction to core hydro algorithms for students
Supplemented with extensive notes deriving the methods ( https://github.com/Open-Astrophysics-Bookshelf/numerical_exercises)
Step1: Grids
Step2: Data is stored as an ArrayIndexer object, which makes it easy to implement differencing on the entire array.
To implement, e.g., the centered difference $b = (a_{i+1,j} - a_{i-1,j}) / (2 \Delta x)$
Step3: Running
Each solver has a collection of problem setups (initial conditions) and inputs files
Commandline: ./pyro.py solver problem inputs
Step4: Example
Step5: Example | Python Code:
import mesh.patch as patch
import mesh.boundary as bnd
import numpy as np
g = patch.Grid2d(16, 16, ng=2)
print(g)
bc = bnd.BC(xlb="periodic", xrb="periodic", ylb="reflect", yrb="outflow")
print(bc)
d = patch.CellCenterData2d(g)
d.register_var("a", bc)
d.create()
print(d)
Explanation: Introduction to Pyro
Michael Zingale, Alice Harpole
Stony Brook University
Why pyro?
Python is a good introductory language—it helps make the way these algorithms work clearer
High level introduction to core hydro algorithms for students
Supplemented with extensive notes deriving the methods ( https://github.com/Open-Astrophysics-Bookshelf/numerical_exercises)
Enables rapid prototyping of code—core infrastructure is in place
Allows for sharing exploration in Jupyter notebooks
Design ideas:
Clarity is emphasized over performance
Single driver implements core evolution
Object-oriented structure: each solver provides a simulation class to manage the different parts of the update
All solvers are 2-d: right balance of complexity and usefulness
Realtime visualization when run in commandline mode
History:
First version in 2003: python + Numeric + C extensions
May 2004: switch to python + numarray + C extensions
cvs commit:
convert from Numeric to numarray, since numarray seems to be the future.
May 2012: revived, rewritten in python + NumPy + f2py
Nov 2018: python + NumPy + Numba
Our usage
We start new undergraduate researchers out with pyro to learn about simulation workflows
Typically have UG add a new problem setup
Current Solvers
linear advection: 2nd and 4th order FV, WENO; CTU, RK, and SDC time integration
compressible hydrodynamics: 2nd order CTU PLM, 2nd order MOL RK, 4th order FV solver with RK or SDC integration
shallow water hydrodynamics
multigrid: for general non-constant coefficient elliptic equations
implicit thermal diffusion: using multigrid
incompressible hydrodynamics: 2nd order accurate approximate projection method
low Mach number atmospheric hydrodynamics: pseudo-incompressible method
special relativistic compressible hydrodynamics
Main driver:
parse runtime parameters
setup the grid
initialize the data for the desired problem
do any necessary pre-evolution initialization
evolve while t < tmax and n < max_steps
fill boundary conditions
get the timestep
evolve for a single timestep
t = t + dt
output
visualization
clean-up
<div class="alert alert-block alert-info">
This driver is flexible enough for all of the time-dependent solvers
</div>
Grids
patch module manages grids and data that lives on them
Fills boundaries, does prolongation/restriction for multigrid
Many convenience functions
End of explanation
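The driver outline above boils down to a loop of roughly the following shape. This is a schematic sketch only: the method names are placeholders, not pyro's actual API, and the real driver also handles runtime parameters, output cadence, and realtime plotting.

```python
def evolve(simulation, tmax, max_steps):
    # Schematic main loop; all simulation.* names below are placeholders.
    t, n = 0.0, 0
    while t < tmax and n < max_steps:
        simulation.fill_boundary_conditions()   # placeholder: fill boundary conditions
        dt = simulation.get_timestep()          # placeholder: get the timestep
        simulation.single_step(dt)              # placeholder: evolve for a single timestep
        t += dt
        n += 1
        simulation.output()                     # placeholder: output
        simulation.visualize()                  # placeholder: visualization
```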
a = d.get_var("a")
Explanation: Grids
End of explanation
b = g.scratch_array()
b.v()[:,:] = (a.ip(1) - a.ip(-1))/(2.0*a.g.dx)
Explanation: Data is stored as an ArrayIndexer object, which makes it easy to implement differencing on the entire array.
To implement:
$$ b = \frac{a_{i+1,j} - a_{i-1,j}}{2 \Delta x}$$
End of explanation
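For reference, the single line above is the standard second-order centered difference; Taylor expansion gives its accuracy:

$$ \frac{a_{i+1,j} - a_{i-1,j}}{2\,\Delta x} \;=\; \left.\frac{\partial a}{\partial x}\right|_{i,j} + \mathcal{O}(\Delta x^2) $$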
from pyro import Pyro
pyro_sim = Pyro("advection")
pyro_sim.initialize_problem("tophat", "inputs.tophat",
other_commands=["mesh.nx=8", "mesh.ny=8",
"vis.dovis=0"])
pyro_sim.run_sim()
Explanation: Running
Each solver has a collection of problem setups (initial conditions) and inputs files
Commandline:
./pyro.py solver problem inputs
Jupyter: all functionality accessible through Pyro class.
Example: advection
End of explanation
dens = pyro_sim.get_var("density")
dens.pretty_print(show_ghost=True, fmt="%6.2f")
Explanation: Example: advection
End of explanation
pyro_sim.sim.dovis()
Explanation: Example: advection
End of explanation |
13,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Classify Flowers with Transfer Learning
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step4: The flowers dataset
The flowers dataset consists of images of flowers with 5 possible class labels.
When training a machine learning model, we split our data into training and test datasets. We will train the model on our training data and then evaluate how well the model performs on data it has never seen - the test set.
Let's download our training and test examples (it may take a while) and split them into train and test sets.
Run the following two cells
Step10: Explore the data
The flowers dataset consists of examples which are labeled images of flowers. Each example contains a JPEG flower image and the class label
Step12: Build the model
We will load a TF-Hub image feature vector module, stack a linear classifier on it, and add training and evaluation ops. The following cell builds a TF graph describing the model and its training, but it doesn't run the training (that will be the next step).
Step16: Train the network
Now that our model is built, let's train it and see how it perfoms on our test set.
Step17: Incorrect predictions
Let's take a closer look at the test examples that our model got wrong.
Are there any mislabeled examples in our test set?
Is there any bad data in the test set - images that aren't actually pictures of flowers?
Are there images where you can understand why the model made a mistake? | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import collections
import io
import math
import os
import random
from six.moves import urllib
from IPython.display import clear_output, Image, display, HTML
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import tensorflow_hub as hub
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.metrics as sk_metrics
import time
Explanation: Classify Flowers with Transfer Learning
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/image_feature_vector"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/image_feature_vector.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/image_feature_vector.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/image_feature_vector.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/imagenet/mobilenet_v2_035_128/feature_vector/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Have you ever seen a beautiful flower and wondered what kind of flower it is? Well, you're not the first, so let's build a way to identify the type of flower from a photo!
For classifying images, a particular type of deep neural network, called a convolutional neural network has proved to be particularly powerful. However, modern convolutional neural networks have millions of parameters. Training them from scratch requires a lot of labeled training data and a lot of computing power (hundreds of GPU-hours or more). We only have about three thousand labeled photos and want to spend much less time, so we need to be more clever.
We will use a technique called transfer learning where we take a pre-trained network (trained on about a million general images), use it to extract features, and train a new layer on top for our own task of classifying images of flowers.
Setup
End of explanation
FLOWERS_DIR = './flower_photos'
TRAIN_FRACTION = 0.8
RANDOM_SEED = 2018
def download_images():
If the images aren't already downloaded, save them to FLOWERS_DIR.
if not os.path.exists(FLOWERS_DIR):
DOWNLOAD_URL = 'http://download.tensorflow.org/example_images/flower_photos.tgz'
print('Downloading flower images from %s...' % DOWNLOAD_URL)
urllib.request.urlretrieve(DOWNLOAD_URL, 'flower_photos.tgz')
!tar xfz flower_photos.tgz
print('Flower photos are located in %s' % FLOWERS_DIR)
def make_train_and_test_sets():
Split the data into train and test sets and get the label classes.
train_examples, test_examples = [], []
shuffler = random.Random(RANDOM_SEED)
is_root = True
for (dirname, subdirs, filenames) in tf.gfile.Walk(FLOWERS_DIR):
# The root directory gives us the classes
if is_root:
subdirs = sorted(subdirs)
classes = collections.OrderedDict(enumerate(subdirs))
label_to_class = dict([(x, i) for i, x in enumerate(subdirs)])
is_root = False
# The sub directories give us the image files for training.
else:
filenames.sort()
shuffler.shuffle(filenames)
full_filenames = [os.path.join(dirname, f) for f in filenames]
label = dirname.split('/')[-1]
label_class = label_to_class[label]
# An example is the image file and it's label class.
examples = list(zip(full_filenames, [label_class] * len(filenames)))
num_train = int(len(filenames) * TRAIN_FRACTION)
train_examples.extend(examples[:num_train])
test_examples.extend(examples[num_train:])
shuffler.shuffle(train_examples)
shuffler.shuffle(test_examples)
return train_examples, test_examples, classes
# Download the images and split the images into train and test sets.
download_images()
TRAIN_EXAMPLES, TEST_EXAMPLES, CLASSES = make_train_and_test_sets()
NUM_CLASSES = len(CLASSES)
print('\nThe dataset has %d label classes: %s' % (NUM_CLASSES, CLASSES.values()))
print('There are %d training images' % len(TRAIN_EXAMPLES))
print('there are %d test images' % len(TEST_EXAMPLES))
Explanation: The flowers dataset
The flowers dataset consists of images of flowers with 5 possible class labels.
When training a machine learning model, we split our data into training and test datasets. We will train the model on our training data and then evaluate how well the model performs on data it has never seen - the test set.
Let's download our training and test examples (it may take a while) and split them into train and test sets.
Run the following two cells:
End of explanation
#@title Show some labeled images
def get_label(example):
Get the label (number) for given example.
return example[1]
def get_class(example):
Get the class (string) of given example.
return CLASSES[get_label(example)]
def get_encoded_image(example):
Get the image data (encoded jpg) of given example.
image_path = example[0]
return tf.gfile.GFile(image_path, 'rb').read()
def get_image(example):
Get image as np.array of pixels for given example.
return plt.imread(io.BytesIO(get_encoded_image(example)), format='jpg')
def display_images(images_and_classes, cols=5):
Display given images and their labels in a grid.
rows = int(math.ceil(len(images_and_classes) / cols))
fig = plt.figure()
fig.set_size_inches(cols * 3, rows * 3)
for i, (image, flower_class) in enumerate(images_and_classes):
plt.subplot(rows, cols, i + 1)
plt.axis('off')
plt.imshow(image)
plt.title(flower_class)
NUM_IMAGES = 15 #@param {type: 'integer'}
display_images([(get_image(example), get_class(example))
for example in TRAIN_EXAMPLES[:NUM_IMAGES]])
Explanation: Explore the data
The flowers dataset consists of examples which are labeled images of flowers. Each example contains a JPEG flower image and the class label: what type of flower it is. Let's display a few images together with their labels.
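It can also help to check how many training examples each class has; a quick, illustrative count using the helpers defined above:
train_class_counts = collections.Counter(get_class(e) for e in TRAIN_EXAMPLES)
for flower_class, count in sorted(train_class_counts.items()):
    print('%-12s %d' % (flower_class, count))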
End of explanation
LEARNING_RATE = 0.01
tf.reset_default_graph()
# Load a pre-trained TF-Hub module for extracting features from images. We've
# chosen this particular module for speed, but many other choices are available.
image_module = hub.Module('https://tfhub.dev/google/imagenet/mobilenet_v2_035_128/feature_vector/2')
# Preprocessing images into tensors with size expected by the image module.
encoded_images = tf.placeholder(tf.string, shape=[None])
image_size = hub.get_expected_image_size(image_module)
def decode_and_resize_image(encoded):
decoded = tf.image.decode_jpeg(encoded, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
return tf.image.resize_images(decoded, image_size)
batch_images = tf.map_fn(decode_and_resize_image, encoded_images, dtype=tf.float32)
# The image module can be applied as a function to extract feature vectors for a
# batch of images.
features = image_module(batch_images)
def create_model(features):
Build a model for classification from extracted features.
# Currently, the model is just a single linear layer. You can try to add
# another layer, but be careful... two linear layers (when activation=None)
# are equivalent to a single linear layer. You can create a nonlinear layer
# like this:
# layer = tf.layers.dense(inputs=..., units=..., activation=tf.nn.relu)
layer = tf.layers.dense(inputs=features, units=NUM_CLASSES, activation=None)
return layer
# For each class (kind of flower), the model outputs some real number as a score
# how much the input resembles this class. This vector of numbers is often
# called the "logits".
logits = create_model(features)
labels = tf.placeholder(tf.float32, [None, NUM_CLASSES])
# Mathematically, a good way to measure how much the predicted probabilities
# diverge from the truth is the "cross-entropy" between the two probability
# distributions. For numerical stability, this is best done directly from the
# logits, not the probabilities extracted from them.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=labels)
cross_entropy_mean = tf.reduce_mean(cross_entropy)
# Let's add an optimizer so we can train the network.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE)
train_op = optimizer.minimize(loss=cross_entropy_mean)
# The "softmax" function transforms the logits vector into a vector of
# probabilities: non-negative numbers that sum up to one, and the i-th number
# says how likely the input comes from class i.
probabilities = tf.nn.softmax(logits)
# We choose the highest one as the predicted class.
prediction = tf.argmax(probabilities, 1)
correct_prediction = tf.equal(prediction, tf.argmax(labels, 1))
# The accuracy will allow us to eval on our test set.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: Build the model
We will load a TF-Hub image feature vector module, stack a linear classifier on it, and add training and evaluation ops. The following cell builds a TF graph describing the model and its training, but it doesn't run the training (that will be the next step).
End of explanation
# How long will we train the network (number of batches).
NUM_TRAIN_STEPS = 100 #@param {type: 'integer'}
# How many training examples we use in each step.
TRAIN_BATCH_SIZE = 10 #@param {type: 'integer'}
# How often to evaluate the model performance.
EVAL_EVERY = 10 #@param {type: 'integer'}
def get_batch(batch_size=None, test=False):
Get a random batch of examples.
examples = TEST_EXAMPLES if test else TRAIN_EXAMPLES
batch_examples = random.sample(examples, batch_size) if batch_size else examples
return batch_examples
def get_images_and_labels(batch_examples):
images = [get_encoded_image(e) for e in batch_examples]
one_hot_labels = [get_label_one_hot(e) for e in batch_examples]
return images, one_hot_labels
def get_label_one_hot(example):
Get the one hot encoding vector for the example.
one_hot_vector = np.zeros(NUM_CLASSES)
np.put(one_hot_vector, get_label(example), 1)
return one_hot_vector
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(NUM_TRAIN_STEPS):
# Get a random batch of training examples.
train_batch = get_batch(batch_size=TRAIN_BATCH_SIZE)
batch_images, batch_labels = get_images_and_labels(train_batch)
# Run the train_op to train the model.
train_loss, _, train_accuracy = sess.run(
[cross_entropy_mean, train_op, accuracy],
feed_dict={encoded_images: batch_images, labels: batch_labels})
is_final_step = (i == (NUM_TRAIN_STEPS - 1))
if i % EVAL_EVERY == 0 or is_final_step:
# Get a batch of test examples.
test_batch = get_batch(batch_size=None, test=True)
batch_images, batch_labels = get_images_and_labels(test_batch)
# Evaluate how well our model performs on the test set.
test_loss, test_accuracy, test_prediction, correct_predicate = sess.run(
[cross_entropy_mean, accuracy, prediction, correct_prediction],
feed_dict={encoded_images: batch_images, labels: batch_labels})
print('Test accuracy at step %s: %.2f%%' % (i, (test_accuracy * 100)))
def show_confusion_matrix(test_labels, predictions):
Compute confusion matrix and normalize.
confusion = sk_metrics.confusion_matrix(
np.argmax(test_labels, axis=1), predictions)
confusion_normalized = confusion.astype("float") / confusion.sum(axis=1)
axis_labels = list(CLASSES.values())
ax = sns.heatmap(
confusion_normalized, xticklabels=axis_labels, yticklabels=axis_labels,
cmap='Blues', annot=True, fmt='.2f', square=True)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
show_confusion_matrix(batch_labels, test_prediction)
Explanation: Train the network
Now that our model is built, let's train it and see how it performs on our test set.
End of explanation
incorrect = [
(example, CLASSES[prediction])
for example, prediction, is_correct in zip(test_batch, test_prediction, correct_predicate)
if not is_correct
]
display_images(
[(get_image(example), "prediction: {0}\nlabel:{1}".format(incorrect_prediction, get_class(example)))
for (example, incorrect_prediction) in incorrect[:20]])
Explanation: Incorrect predictions
Let's take a closer look at the test examples that our model got wrong.
Are there any mislabeled examples in our test set?
Is there any bad data in the test set - images that aren't actually pictures of flowers?
Are there images where you can understand why the model made a mistake?
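One quick way to start answering these questions is to count the mistakes per true class, reusing the incorrect list built above (an illustrative summary):
mistakes_per_class = collections.Counter(get_class(example) for example, _ in incorrect)
print('Incorrect predictions by true class:')
for flower_class, count in mistakes_per_class.most_common():
    print('  %-12s %d' % (flower_class, count))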
End of explanation |
13,479 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Emissions
Step1: Loading a configuration from file
We will use the following file as an example
Step2: The following line loads the setup detailled in that file
Step3: Browsing the configuration setup
Browsing through the configuration tree
The HEMCO setup consists of several extensions (including the HEMCO Core extension), which may have one or several attached emission data fields. Each emission data field may, in turn, have one or several scale factor or mask data fields that will be applied to it.
Use the extensions attribute to access to all extensions listed in the setup
Step4: The extensions attribute is a RecordList object (see the pygchem.utils.data_structures module). That object mainly behaves like a Python list, but it also allows to select extensions based on their properties, to remove selected extensions or to add new extensions.
Each item in the RecordList is an EmmisionExt object, which inherits from the Record class.
Select one extension based on its name
Step5: Select one or several extensions based on other attributes
Step6: Some parameters related to the previously selected extension
Step7: Get all base emission fields assigned to the extension
Step8: Get all extension data fields for this extension
Step9: Get all base emission fields in the HEMCO configuration | Python Code:
from pygchem import emissions
Explanation: Emissions: HEMCO Python API
The module pygchem.emissions provides an API for Harvard-NASA Emissions Component (HEMCO). Currently, it allows to read / write HEMCO configuration files and to browse or edit an existing configuration (or create a new configuration from scratch).
Note: this module is under active development and doesn't work yet with the last release of HEMCO.
End of explanation
hemco_example_file = '../data/HEMCO_test'
Explanation: Loading a configuration from file
We will use the following file as an example:
End of explanation
hemco_setup = emissions.load_setup(hemco_example_file)
Explanation: The following line loads the setup detailled in that file:
End of explanation
hemco_setup.extensions
Explanation: Browsing the configuration setup
Browsing through the configuration tree
The HEMCO setup consists of several extensions (including the HEMCO Core extension), which may have one or several attached emission data fields. Each emission data field may, in turn, have one or several scale factor or mask data fields that will be applied to it.
Use the extensions attribute to access to all extensions listed in the setup:
End of explanation
megan_ext = hemco_setup.extensions.select_item("MEGAN")
megan_ext
Explanation: The extensions attribute is a RecordList object (see the pygchem.utils.data_structures module). That object mainly behaves like a Python list, but it also allows to select extensions based on their properties, to remove selected extensions or to add new extensions.
Each item in the RecordList is an EmmisionExt object, which inherits from the Record class.
Select one extension based on its name:
End of explanation
selection = hemco_setup.extensions.select(enabled=False)
# get the names of the selected extensions
selection.keys
Explanation: Select one or several extensions based on other attributes:
End of explanation
megan_ext.name
megan_ext.enabled
megan_ext.eid # extension ID
megan_ext.settings
Explanation: Some parameters related to the previously selected extension:
End of explanation
megan_bef = megan_ext.base_emission_fields
print megan_bef
Explanation: Get all base emission fields assigned to the extension:
End of explanation
megan_df = megan_ext.extension_data
print megan_df
Explanation: Get all extension data fields for this extension:
End of explanation
all_bef = hemco_setup.base_emission_fields
# get the names of the base emission fields
print all_bef.keys
bef = []
for ext in hemco_setup.extensions:
print ext
print ext.extension_data + ext.base_emission_fields
bef.extend(ext.base_emission_fields + ext.extension_data)
Explanation: Get all base emission fields in the HEMCO configuration:
End of explanation |
13,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Prediction
Objectives
1. Build a linear, DNN and CNN model in keras to predict stock market behavior.
2. Build a simple RNN model and a multi-layer RNN model in keras.
3. Combine RNN and CNN architecture to create a keras model to predict stock market behavior.
In this lab we will build a custom Keras model to predict stock market behavior using the stock market dataset we created in the previous labs. We'll start with a linear, DNN and CNN model
Since the features of our model are sequential in nature, we'll next look at how to build various RNN models in keras. We'll start with a simple RNN model and then see how to create a multi-layer RNN in keras. We'll also see how to combine features of 1-dimensional CNNs with a typical RNN architecture.
We will be exploring a lot of different model types in this notebook. To keep track of your results, record the accuracy on the validation set in the table here. In machine learning there are rarely any "one-size-fits-all" so feel free to test out different hyperparameters (e.g. train steps, regularization, learning rates, optimizers, batch size) for each of the models. Keep track of your model performance in the chart below.
| Model | Validation Accuracy |
|----------|
Step1: Explore time series data
We'll start by pulling a small sample of the time series data from Big Query and write some helper functions to clean up the data for modeling. We'll use the data from the percent_change_sp500 table in BigQuery. The close_values_prior_260 column contains the close values for any given stock for the previous 260 days.
Step3: The function clean_data below does three things
Step6: Read data and preprocessing
Before we begin modeling, we'll preprocess our features by scaling to the z-score. This will ensure that the range of the feature values being fed to the model are comparable and should help with convergence during gradient descent.
Step7: Let's plot a few examples and see that the preprocessing steps were implemented correctly.
Step11: Make train-eval-test split
Next, we'll make repeatable splits for our train/validation/test datasets and save these datasets to local csv files. The query below will take a subsample of the entire dataset and then create a 70-15-15 split for the train/validation/test sets.
Step12: Modeling
For experimentation purposes, we'll train various models using data we can fit in memory using the .csv files we created above.
Step14: To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy.
Step15: Baseline
Before we begin modeling in keras, let's create a benchmark using a simple heuristic. Let's see what kind of accuracy we would get on the validation set if we predict the majority class of the training set.
Step16: Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set.
Linear model
We'll start with a simple linear model, mapping our sequential input to a single fully dense layer.
Step17: The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training.
Step18: Deep Neural Network
The linear model is an improvement on our naive benchmark. Perhaps we can do better with a more complicated model. Next, we'll create a deep neural network with keras. We'll experiment with a two layer DNN here but feel free to try a more complex model or add any other additional techniques to try an improve your performance.
Step19: Convolutional Neural Network
The DNN does slightly better. Let's see how a convolutional neural network performs.
A 1-dimensional convolutional can be useful for extracting features from sequential data or deriving features from shorter, fixed-length segments of the data set. Check out the documentation for how to implement a Conv1d in Tensorflow. Max pooling is a downsampling strategy commonly used in conjunction with convolutional neural networks. Next, we'll build a CNN model in keras using the Conv1D to create convolution layers and MaxPool1D to perform max pooling before passing to a fully connected dense layer.
Step20: Recurrent Neural Network
RNNs are particularly well-suited for learning sequential data. They retain state information from one iteration to the next by feeding the output from one cell as input for the next step. In the cell below, we'll build a RNN model in keras. The final state of the RNN is captured and then passed through a fully connected layer to produce a prediction.
Step21: Multi-layer RNN
Next, we'll build multi-layer RNN. Just as multiple layers of a deep neural network allow for more complicated features to be learned during training, additional RNN layers can potentially learn complex features in sequential data. For a multi-layer RNN the output of the first RNN layer is fed as the input into the next RNN layer.
Step22: Combining CNN and RNN architecture
Finally, we'll look at some model architectures which combine aspects of both convolutional and recurrant networks. For example, we can use a 1-dimensional convolution layer to process our sequences and create features which are then passed to a RNN model before prediction.
Step23: We can also try building a hybrid model which uses a 1-dimensional CNN to create features from the outputs of an RNN. | Python Code:
import os
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Dense, DenseFeatures,
Conv1D, MaxPool1D,
Reshape, RNN,
LSTM, GRU, Bidirectional)
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
# To plot pretty figures
%matplotlib inline
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# For reproducible results.
from numpy.random import seed
seed(1)
tf.random.set_seed(2)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "your-gcp-bucket-here" # REPLACE WITH YOUR BUCKET
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
%env
PROJECT = PROJECT
BUCKET = BUCKET
REGION = REGION
Explanation: Time Series Prediction
Objectives
1. Build a linear, DNN and CNN model in keras to predict stock market behavior.
2. Build a simple RNN model and a multi-layer RNN model in keras.
3. Combine RNN and CNN architecture to create a keras model to predict stock market behavior.
In this lab we will build a custom Keras model to predict stock market behavior using the stock market dataset we created in the previous labs. We'll start with a linear, DNN and CNN model
Since the features of our model are sequential in nature, we'll next look at how to build various RNN models in keras. We'll start with a simple RNN model and then see how to create a multi-layer RNN in keras. We'll also see how to combine features of 1-dimensional CNNs with a typical RNN architecture.
We will be exploring a lot of different model types in this notebook. To keep track of your results, record the accuracy on the validation set in the table here. In machine learning there are rarely any "one-size-fits-all" so feel free to test out different hyperparameters (e.g. train steps, regularization, learning rates, optimizers, batch size) for each of the models. Keep track of your model performance in the chart below.
| Model | Validation Accuracy |
|----------|:---------------:|
| Baseline | 0.295 |
| Linear | -- |
| DNN | -- |
| 1-d CNN | -- |
| simple RNN | -- |
| multi-layer RNN | -- |
| RNN using CNN features | -- |
| CNN using RNN features | -- |
Load necessary libraries and set up environment variables
End of explanation
%%time
bq = bigquery.Client(project=PROJECT)
bq_query = '''
#standardSQL
SELECT
symbol,
Date,
direction,
close_values_prior_260
FROM
`stock_market.eps_percent_change_sp500`
LIMIT
100
'''
df_stock_raw = bq.query(bq_query).to_dataframe()
df_stock_raw.head()
Explanation: Explore time series data
We'll start by pulling a small sample of the time series data from Big Query and write some helper functions to clean up the data for modeling. We'll use the data from the percent_change_sp500 table in BigQuery. The close_values_prior_260 column contains the close values for any given stock for the previous 260 days.
End of explanation
def clean_data(input_df):
    """Cleans data to prepare for training.

    Args:
        input_df: Pandas dataframe.
    Returns:
        Pandas dataframe.
    """
df = input_df.copy()
# Remove inf/na values.
real_valued_rows = ~(df == np.inf).max(axis=1)
df = df[real_valued_rows].dropna()
# TF doesn't accept datetimes in DataFrame.
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
# TF requires numeric label.
df['direction_numeric'] = df['direction'].apply(lambda x: {'DOWN': 0,
'STAY': 1,
'UP': 2}[x])
return df
df_stock = clean_data(df_stock_raw)
df_stock.head()
Explanation: The function clean_data below does three things:
1. First, we'll remove any inf or NA values
2. Next, we parse the Date field to read it as a string.
3. Lastly, we convert the label direction into a numeric quantity, mapping 'DOWN' to 0, 'STAY' to 1 and 'UP' to 2.
End of explanation
STOCK_HISTORY_COLUMN = 'close_values_prior_260'
COL_NAMES = ['day_' + str(day) for day in range(0, 260)]
LABEL = 'direction_numeric'
def _scale_features(df):
    """z-scale feature columns of Pandas dataframe.

    Args:
        df: Pandas dataframe.
    Returns:
        Pandas dataframe with each column standardized according to the
        values in that column.
    """
avg = df.mean()
std = df.std()
return (df - avg) / std
def create_features(df, label_name):
    """Create modeling features and label from Pandas dataframe.

    Args:
        df: Pandas dataframe.
        label_name: str, the column name of the label.
    Returns:
        Pandas dataframe.
    """
# Expand 1 column containing a list of close prices to 260 columns.
time_series_features = df[STOCK_HISTORY_COLUMN].apply(pd.Series)
# Rename columns.
time_series_features.columns = COL_NAMES
time_series_features = _scale_features(time_series_features)
# Concat time series features with static features and label.
label_column = df[LABEL]
return pd.concat([time_series_features,
label_column], axis=1)
df_features = create_features(df_stock, LABEL)
df_features.head()
Explanation: Read data and preprocessing
Before we begin modeling, we'll preprocess our features by scaling to the z-score. This will ensure that the range of the feature values being fed to the model are comparable and should help with convergence during gradient descent.
End of explanation
ix_to_plot = [0, 1, 9, 5]
fig, ax = plt.subplots(1, 1, figsize=(15, 8))
for ix in ix_to_plot:
label = df_features['direction_numeric'].iloc[ix]
example = df_features[COL_NAMES].iloc[ix]
ax = example.plot(label=label, ax=ax)
ax.set_ylabel('scaled price')
ax.set_xlabel('prior days')
ax.legend()
Explanation: Let's plot a few examples and see that the preprocessing steps were implemented correctly.
End of explanation
def _create_split(phase):
    """Create string to produce train/valid/test splits for a SQL query.

    Args:
        phase: str, either TRAIN, VALID, or TEST.
    Returns:
        String.
    """
floor, ceiling = '2002-11-01', '2010-07-01'
if phase == 'VALID':
floor, ceiling = '2010-07-01', '2011-09-01'
elif phase == 'TEST':
floor, ceiling = '2011-09-01', '2012-11-30'
return '''
WHERE Date >= '{0}'
AND Date < '{1}'
'''.format(floor, ceiling)
def create_query(phase):
    """Create SQL query to create train/valid/test splits on subsample.

    Args:
        phase: str, either TRAIN, VALID, or TEST.
    Returns:
        String.
    """
    basequery = """
    #standardSQL
    SELECT
        symbol,
        Date,
        direction,
        close_values_prior_260
    FROM
        `stock_market.eps_percent_change_sp500`
    """
bq = bigquery.Client(project=PROJECT)
for phase in ['TRAIN', 'VALID', 'TEST']:
# 1. Create query string
query_string = create_query(phase)
# 2. Load results into DataFrame
df = bq.query(query_string).to_dataframe()
# 3. Clean, preprocess dataframe
df = clean_data(df)
df = create_features(df, label_name='direction_numeric')
# 3. Write DataFrame to CSV
if not os.path.exists('../data'):
os.mkdir('../data')
df.to_csv('../data/stock-{}.csv'.format(phase.lower()),
index_label=False, index=False)
print("Wrote {} lines to {}".format(
len(df),
'../data/stock-{}.csv'.format(phase.lower())))
ls -la ../data
Explanation: Make train-eval-test split
Next, we'll make repeatable splits for our train/validation/test datasets and save these datasets to local csv files. The query below will take a subsample of the entire dataset and then create a 70-15-15 split for the train/validation/test sets.
End of explanation
N_TIME_STEPS = 260
N_LABELS = 3
Xtrain = pd.read_csv('../data/stock-train.csv')
Xvalid = pd.read_csv('../data/stock-valid.csv')
ytrain = Xtrain.pop(LABEL)
yvalid = Xvalid.pop(LABEL)
ytrain_categorical = to_categorical(ytrain.values)
yvalid_categorical = to_categorical(yvalid.values)
Explanation: Modeling
For experimentation purposes, we'll train various models using data we can fit in memory using the .csv files we created above.
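A quick sanity check on the shapes and the class balance of the in-memory splits before training:
print('Train features:', Xtrain.shape, ' Validation features:', Xvalid.shape)
print('Train class balance:')
print(ytrain.value_counts(normalize=True))
print('Validation class balance:')
print(yvalid.value_counts(normalize=True))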
End of explanation
def plot_curves(train_data, val_data, label='Accuracy'):
    """Plot training and validation metrics on a single axis.

    Args:
        train_data: list, metrics obtained from training data.
        val_data: list, metrics obtained from validation data.
        label: str, title and label for plot.
    Returns:
        Matplotlib plot.
    """
plt.plot(np.arange(len(train_data)) + 0.5,
train_data,
"b.-", label="Training " + label)
plt.plot(np.arange(len(val_data)) + 1,
val_data, "r.-",
label="Validation " + label)
plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True))
plt.legend(fontsize=14)
plt.xlabel("Epochs")
plt.ylabel(label)
plt.grid(True)
Explanation: To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy.
End of explanation
sum(yvalid == ytrain.value_counts().idxmax()) / yvalid.shape[0]
Explanation: Baseline
Before we begin modeling in keras, let's create a benchmark using a simple heuristic. Let's see what kind of accuracy we would get on the validation set if we predict the majority class of the training set.
End of explanation
# TODO 1a
model = Sequential()
model.add(Dense(units=N_LABELS,
activation='softmax',
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=30,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
Explanation: Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set.
Linear model
We'll start with a simple linear model, mapping our sequential input to a single fully dense layer.
End of explanation
np.mean(history.history['val_accuracy'][-5:])
Explanation: The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training.
End of explanation
# TODO 1b
dnn_hidden_units = [16, 8]
model = Sequential()
for layer in dnn_hidden_units:
model.add(Dense(units=layer,
activation="relu"))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=10,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: Deep Neural Network
The linear model is an improvement on our naive benchmark. Perhaps we can do better with a more complicated model. Next, we'll create a deep neural network with keras. We'll experiment with a two layer DNN here but feel free to try a more complex model or add any other additional techniques to try an improve your performance.
End of explanation
# TODO 1c
model = Sequential()
# Convolutional layer
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(Conv1D(filters=5,
kernel_size=5,
strides=2,
padding="valid",
input_shape=[None, 1]))
model.add(MaxPool1D(pool_size=2,
strides=None,
padding='valid'))
# Flatten the result and pass through DNN.
model.add(tf.keras.layers.Flatten())
model.add(Dense(units=N_TIME_STEPS//4,
activation="relu"))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.01),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=10,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: Convolutional Neural Network
The DNN does slightly better. Let's see how a convolutional neural network performs.
A 1-dimensional convolutional can be useful for extracting features from sequential data or deriving features from shorter, fixed-length segments of the data set. Check out the documentation for how to implement a Conv1d in Tensorflow. Max pooling is a downsampling strategy commonly used in conjunction with convolutional neural networks. Next, we'll build a CNN model in keras using the Conv1D to create convolution layers and MaxPool1D to perform max pooling before passing to a fully connected dense layer.
End of explanation
# TODO 2a
model = Sequential()
# Reshape inputs to pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(LSTM(N_TIME_STEPS // 8,
activation='relu',
return_sequences=False))
model.add(Dense(units=N_LABELS,
activation='softmax',
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
# Create the model.
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=40,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: Recurrent Neural Network
RNNs are particularly well-suited for learning sequential data. They retain state information from one iteration to the next by feeding the output from one cell as input for the next step. In the cell below, we'll build a RNN model in keras. The final state of the RNN is captured and then passed through a fully connected layer to produce a prediction.
End of explanation
# TODO 2b
rnn_hidden_units = [N_TIME_STEPS // 16,
N_TIME_STEPS // 32]
model = Sequential()
# Reshape inputs to pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
for layer in rnn_hidden_units[:-1]:
model.add(GRU(units=layer,
activation='relu',
return_sequences=True))
model.add(GRU(units=rnn_hidden_units[-1],
return_sequences=False))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=50,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: Multi-layer RNN
Next, we'll build multi-layer RNN. Just as multiple layers of a deep neural network allow for more complicated features to be learned during training, additional RNN layers can potentially learn complex features in sequential data. For a multi-layer RNN the output of the first RNN layer is fed as the input into the next RNN layer.
End of explanation
# TODO 3a
model = Sequential()
# Reshape inputs for convolutional layer
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(Conv1D(filters=20,
kernel_size=4,
strides=2,
padding="valid",
input_shape=[None, 1]))
model.add(MaxPool1D(pool_size=2,
strides=None,
padding='valid'))
model.add(LSTM(units=N_TIME_STEPS//2,
return_sequences=False,
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.add(Dense(units=N_LABELS, activation="softmax"))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=30,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: Combining CNN and RNN architecture
Finally, we'll look at some model architectures which combine aspects of both convolutional and recurrent networks. For example, we can use a 1-dimensional convolution layer to process our sequences and create features which are then passed to a RNN model before prediction.
End of explanation
# TODO 3b
rnn_hidden_units = [N_TIME_STEPS // 32,
N_TIME_STEPS // 64]
model = Sequential()
# Reshape inputs and pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
for layer in rnn_hidden_units:
model.add(LSTM(layer, return_sequences=True))
# Apply 1d convolution to RNN outputs.
model.add(Conv1D(filters=5,
kernel_size=3,
strides=2,
padding="valid"))
model.add(MaxPool1D(pool_size=4,
strides=None,
padding='valid'))
# Flatten the convolution output and pass through DNN.
model.add(tf.keras.layers.Flatten())
model.add(Dense(units=N_TIME_STEPS // 32,
activation="relu",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.add(Dense(units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1)))
model.compile(optimizer=Adam(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=80,
verbose=0)
plot_curves(history.history['loss'],
history.history['val_loss'],
label='Loss')
plot_curves(history.history['accuracy'],
history.history['val_accuracy'],
label='Accuracy')
np.mean(history.history['val_accuracy'][-5:])
Explanation: We can also try building a hybrid model which uses a 1-dimensional CNN to create features from the outputs of an RNN.
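Once a preferred architecture is chosen, the held-out test split written earlier can be used for a final check. A minimal sketch (it evaluates whichever model object was compiled last in this notebook, and assumes the test csv was written with the same preprocessing as the train/validation files):
Xtest = pd.read_csv('../data/stock-test.csv')
ytest = Xtest.pop(LABEL)
ytest_categorical = to_categorical(ytest.values)
test_loss, test_accuracy = model.evaluate(Xtest.values, ytest_categorical, verbose=0)
print('Test accuracy: %.3f' % test_accuracy)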
End of explanation |
13,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Data Vize
Step1: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Load the dataset
Step2: Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
If we encounter the value 327.40 meter per hour, we should probably start building Noah's ark
Therefor, it seems reasonable to drop values too large, considered as outliers
Step3: Quick analysis for the sparsity by column
Step4: We see that except for the fixed features minutes_past, radardist_km and Expected the dataset is mainly sparse.
Let's transform the dataset to conduct more analysis
We regroup the data by ID
Step5: How much observations is there for each ID ?
Step6: We see there is a lot of ID with 6 or 12 observations, that mean one every 5 or 10 minutes on average.
Step7: Now let's do the analysis on different subsets
Step8: Strangely we notice that the less observations there is, the more it rains on average
However more of the expected rainfall fall below 0.5
What prediction should we make if there is no data?
Step9: Predicitons
As a first try, we make predictions on the complete data, and return the 50th percentile and uncomplete and fully empty data
Step10: | Python Code:
# from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function
%matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats
# Sk cheats
from sklearn.cross_validation import cross_val_score # cross val
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.preprocessing import Imputer # get rid of nan
Explanation: FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Data Vize
End of explanation
%%time
filename = "data/reduced_train_100000.csv"
#filename = "data/reduced_test_100000.csv"
raw = pd.read_csv(filename)
raw = raw.set_index('Id')
raw['Expected'].describe()
Explanation: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Load the dataset
End of explanation
# Considering that the gauge may concentrate the rainfall, we set the cap to 1000
# Comment this line to analyse the complete dataset
l = len(raw)
raw = raw[raw['Expected'] < 1000]
print("Dropped %d (%0.2f%%)"%(l-len(raw),(l-len(raw))/float(l)*100))
raw.head(5)
Explanation: Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
If we encounter the value 327.40 meters per hour, we should probably start building Noah's ark
Therefore, it seems reasonable to drop values that are too large and treat them as outliers
End of explanation
l = float(len(raw["minutes_past"]))
comp = [[1-raw[i].isnull().sum()/l , i] for i in raw.columns]
comp.sort(key=lambda x: x[0], reverse=True)
sns.barplot(zip(*comp)[0],zip(*comp)[1],palette=sns.cubehelix_palette(len(comp), start=.5, rot=-.75))
plt.title("Percentage of non NaN data")
plt.show()
Explanation: Quick analysis for the sparsity by column
End of explanation
# We select all features except for the minutes past,
# because we ignore the time repartition of the sequence for now
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getXy(raw):
selected_columns = list([ u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX, docY = [], []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
docY.append(float(raw.loc[i]["Expected"]))
else:
m = data.loc[i].as_matrix()
docX.append(m)
docY.append(float(raw.loc[i][:1]["Expected"]))
X , y = np.array(docX) , np.array(docY)
return X,y
raw.index.unique()
raw.isnull().sum()
Explanation: We see that except for the fixed features minutes_past, radardist_km and Expected the dataset is mainly sparse.
Let's transform the dataset to conduct more analysis
We regroup the data by ID
End of explanation
X,y=getXy(raw)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On complete dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
Explanation: How much observations is there for each ID ?
End of explanation
pd.DataFrame(y).describe()
Explanation: We see there is a lot of ID with 6 or 12 observations, that mean one every 5 or 10 minutes on average.
End of explanation
#noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()]
noAnyNan = raw.dropna()
noAnyNan.isnull().sum()
X,y=getXy(noAnyNan)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On fully filled dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
noFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()]
noFullNan.isnull().sum()
X,y=getXy(noFullNan)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On partly filled dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
fullNan = raw.drop(raw[features_columns].dropna(how='all').index)
fullNan.isnull().sum()
X,y=getXy(fullNan)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On fully empty dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
Explanation: Now let's do the analysis on different subsets:
On fully filled dataset
End of explanation
print("%d observations" %(len(raw)))
#print("%d fully filled, %d partly filled, %d fully empty"
# %(len(noAnyNan),len(noFullNan),len(raw)-len(noFullNan)))
print("%0.1f%% fully filled, %0.1f%% partly filled, %0.1f%% fully empty"
%(len(noAnyNan)/float(len(raw))*100,
len(noFullNan)/float(len(raw))*100,
(len(raw)-len(noFullNan))/float(len(raw))*100))
Explanation: Strangely we notice that the less observations there is, the more it rains on average
However more of the expected rainfall fall below 0.5
What prediction should we make if there is no data?
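One simple option is to fall back on a constant per subset; the per-Id medians of the gauge reading (matching the 50% rows of the describe() tables above) can be read off directly:
print("Median Expected per Id, fully empty rows  : %0.3f" % fullNan["Expected"].groupby(level=0).first().median())
print("Median Expected per Id, partly filled rows: %0.3f" % noFullNan["Expected"].groupby(level=0).first().median())
print("Median Expected per Id, fully filled rows : %0.3f" % noAnyNan["Expected"].groupby(level=0).first().median())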
End of explanation
etreg = ExtraTreesRegressor(n_estimators=100, max_depth=None, min_samples_split=1, random_state=0)
X,y=getXy(noAnyNan)
XX = [np.array(t).mean(0) for t in X]
split = 0.2
ps = int(len(XX) * (1-split))
X_train = XX[:ps]
y_train = y[:ps]
X_test = XX[ps:]
y_test = y[ps:]
%%time
etreg.fit(X_train,y_train)
%%time
et_score = cross_val_score(etreg, XX, y, cv=5)
print("Score: %s\tMean: %.03f"%(et_score,et_score.mean()))
err = (etreg.predict(X_test)-y_test)**2
err.sum()/len(err)
r = random.randrange(len(X_train))
print(r)
print(etreg.predict(X_train[r]))
print(y_train[r])
r = random.randrange(len(X_test))
print(r)
print(etreg.predict(X_test[r]))
print(y_test[r])
Explanation: Predicitons
As a first try, we make predictions on the complete data, and fall back to the 50th percentile of Expected for partly filled and fully empty data
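Since (to the best of our knowledge) the competition leaderboard scores submissions on mean absolute error, it is also worth tracking MAE on the held-out split, e.g.:
val_mae = np.abs(etreg.predict(X_test) - y_test).mean()
print("Validation MAE: %0.3f" % val_mae)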
End of explanation
filename = "data/reduced_test_5000.csv"
test = pd.read_csv(filename)
test = test.set_index('Id')
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getX(raw):
selected_columns = list([ u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX= []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
else:
m = data.loc[i].as_matrix()
docX.append(m)
X = np.array(docX)
return X
X=getX(test)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On test dataset)")
plt.plot()
#print("Average gauge observation in mm: %0.2f"%y.mean())
etreg.predict(X_test)
testFull = test.dropna()
X=getX(testFull)
XX = [np.array(t).mean(0) for t in X]
pd.DataFrame(etreg.predict(XX)).describe()
predFull = zip(testFull.index.unique(),etreg.predict(XX))
b = np.empty(len(a))
b.fill(3.14)
zip(a,b)
predFull[:10]
testNan = test.drop(test[features_columns].dropna(how='all').index)
tmp = np.empty(len(testNan))
tmp.fill(0.445000) # 50th percentile of full Nan dataset
predNan = zip(testNan.index.unique(),tmp)
predNan[:10]
testLeft = test.drop(testNan.index.unique()).drop(testFull.index.unique())
tmp = np.empty(len(testLeft))
tmp.fill(1.27) # 50th percentile of the partly filled dataset
predLeft = zip(testLeft.index.unique(),tmp)
len(testFull.index.unique())
len(testNan.index.unique())
len(testLeft.index.unique())
pred = predFull + predNan + predLeft
pred.sort(key=lambda x: x[0], reverse=False)
submission = pd.DataFrame(pred)
submission.columns = ["Id","Expected"]
submission.head()
submission.to_csv("first_submit.csv",index=False)
Explanation:
End of explanation |
13,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Please find torch implementation of this notebook here
Step1: Imports
Step2: Load Data
Step3: Let's view some images(Because the images are normalized so we need to first convert them to the range of 0 to 1) in order to view them.
Step4: Model
Step5: Training
Step6: Testing The Model
<ul>
<li>0 = No HotDog</li>
<li>1 = HotDog</li>
</ul> | Python Code:
# Install Augmax for Image Augmentation
try:
import augmax
except ModuleNotFoundError:
%pip install -qq git+https://github.com/khdlr/augmax.git -q
import augmax
# Install the jax-resnet
try:
import jax_resnet
except ModuleNotFoundError:
%pip install -qq git+https://github.com/n2cholas/jax-resnet.git -q
import jax_resnet
# Download and Extract Data
!wget http://d2l-data.s3-accelerate.amazonaws.com/hotdog.zip
!unzip -qq /content/hotdog.zip -d /content/
Explanation: Please find torch implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/19/finetune_cnn_torch.ipynb
<a href="https://colab.research.google.com/drive/1c0yus2G9AIHXjstUDGT9u7cXgAkJ4tCF?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Author of the Notebook : Susnato Dhar (Github : https://github.com/susnato)
This notebook is JAX compatible version of the main notebook which can be found <a href="https://github.com/probml/probml-notebooks/blob/main/notebooks-d2l/finetune_cnn_torch.ipynb">here</a>.
<br>All the credit goes to the author of the main notebook; I just converted it to JAX.
I used <a href="https://github.com/n2cholas/jax-resnet">this repository</a> to impelement the pre-trained version of ResNet18 in order to fine tune it!<br>I used the Dataset HotDog VS No HotDog from this <a href="http://d2l-data.s3-accelerate.amazonaws.com/hotdog.zip">link</a>.
End of explanation
import os
import sys
try:
import cv2
except ModuleNotFoundError:
%pip install -qq opencv-python
import cv2
import glob
try:
import tqdm
except ModuleNotFoundError:
%pip install -qq tqdm
import tqdm
import shutil
from typing import Any
from IPython import display
import matplotlib.pyplot as plt
try:
from skimage.util import montage
except ModuleNotFoundError:
%pip install -qq scikit-image
from skimage.util import montage
%matplotlib inline
import jax
import jax.numpy as jnp
import jax.random as jrand
key = jrand.PRNGKey(42)
Explanation: Imports
End of explanation
try:
import tensorflow as tf
except ModuleNotFoundError:
%pip install -qq tensorflow
import tensorflow as tf
def load_img(dir, shape=False):
img = cv2.imread(dir)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if shape:
img = cv2.resize(img, shape)
return jnp.array(img)
augs = augmax.Chain(
augmax.HorizontalFlip(), augmax.Resize(224, 224), augmax.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
)
def apply_augs(img, augs, key):
img = augs(key, img)
return img
class DataLoader(tf.keras.utils.Sequence):
def __init__(self, batch_size, motiv, shuffle=False):
self.batch_size = batch_size
assert motiv in ["train", "test"]
self.motiv = motiv
self.hot_dogs_list = f"/content/hotdog/{motiv}/hotdog"
self.non_hot_dogs_list = f"/content/hotdog/{motiv}/not-hotdog"
self.key = jrand.PRNGKey(42)
self.shuffle = shuffle
def __len__(self):
return len(os.listdir(self.hot_dogs_list)) // self.batch_size
def __getitem__(self, ix):
X, Y = [], []
hdl = os.listdir(self.hot_dogs_list)[ix * self.batch_size : (ix + 1) * self.batch_size]
nhdl = os.listdir(self.non_hot_dogs_list)[ix * self.batch_size : (ix + 1) * self.batch_size]
for lst in zip(hdl, nhdl):
X.append(apply_augs(load_img(os.path.join(self.hot_dogs_list, lst[0])), augs, self.key))
Y.append(1)
X.append(apply_augs(load_img(os.path.join(self.non_hot_dogs_list, lst[1])), augs, self.key))
Y.append(0)
X = jnp.array(X).reshape(self.batch_size * 2, 224, 224, 3).astype(jnp.float16)
Y = (
jnp.array(Y)
.reshape(
self.batch_size * 2,
)
.astype(jnp.uint8)
)
ix = jnp.arange(X.shape[0])
ix = jrand.shuffle(key, ix)
X = X[ix]
Y = Y[ix]
return X, Y
train_dl = DataLoader(batch_size=32, motiv="train")
val_dl = DataLoader(batch_size=32, motiv="test")
Explanation: Load Data
End of explanation
example_x, example_y = train_dl.__getitem__(0)
viewable_imgs = example_x[:32]
viewable_imgs = (viewable_imgs - viewable_imgs.min()) / (viewable_imgs.max() - viewable_imgs.min())
viewable_imgs = viewable_imgs * 255.0
viewable_imgs = viewable_imgs.astype(jnp.uint8)
plt.imshow(montage(viewable_imgs, multichannel=True, grid_shape=(4, 8)))
plt.show();
Explanation: Let's view some images. Because the images are normalized, we first need to rescale them back to the 0-1 range in order to display them.
End of explanation
import jax
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
try:
import flax.linen as nn
except ModuleNotFoundError:
%pip install -qq flax
import flax.linen as nn
from flax.training import train_state
try:
from jax_resnet import pretrained_resnet, pretrained_resnest
except ModuleNotFoundError:
%pip install -qq jax_resnet
from jax_resnet import pretrained_resnet, pretrained_resnest
from jax_resnet.common import Sequential
class MyResnet(nn.Module):
@nn.compact
def __call__(self, data):
ResNet18, _ = pretrained_resnet(18)
model = ResNet18()
model = Sequential(model.layers[:-1])
x = model(data)
x = nn.Dense(features=256)(x)
x = nn.relu(x)
x = nn.Dense(features=2)(x)
return x
class TrainState(train_state.TrainState):
batch_stats: Any
model = MyResnet()
vars = model.init(key, jnp.ones((1, 224, 224, 3)))
state = TrainState.create(
apply_fn=model.apply, params=vars["params"], batch_stats=vars["batch_stats"], tx=optax.adam(learning_rate=0.00001)
)
@jax.jit
def compute_metrics(pred, true):
loss = jnp.mean(optax.softmax_cross_entropy(logits=pred, labels=jax.nn.one_hot(true, num_classes=2)))
pred = nn.softmax(pred)
accuracy = jnp.mean(jnp.argmax(pred, -1) == true)
return {"loss": loss, "accuracy": jnp.mean(accuracy)}
@jax.jit
def eval_step(state, batch):
variables = {"params": state.params, "batch_stats": state.batch_stats}
logits, _ = state.apply_fn(variables, batch["x"], mutable=["batch_stats"])
return compute_metrics(pred=logits, true=batch["y"])
def train(state, epochs):
@jax.jit
def bce_loss(params):
y_pred, new_model_state = state.apply_fn(
{"params": params, "batch_stats": state.batch_stats}, batch["x"], mutable=["batch_stats"]
)
y_true = jax.nn.one_hot(batch["y"], num_classes=2)
loss = optax.softmax_cross_entropy(logits=y_pred, labels=y_true)
return jnp.mean(loss), (new_model_state, y_pred)
grad_fn = jax.value_and_grad(bce_loss, has_aux=True)
for e in range(epochs):
batch_metrics = []
for i in range(train_dl.__len__()):
batch = {}
batch["x"], batch["y"] = train_dl.__getitem__(i)
aux, grad = grad_fn(state.params)
batch_loss, (new_model_state, batch_pred) = aux
state = state.apply_gradients(grads=grad, batch_stats=new_model_state["batch_stats"])
computed_metrics = compute_metrics(pred=batch_pred, true=batch["y"])
sys.stdout.write(
"\rEpoch : {}/{} Iteration : {}/{} Loss : {} Accuracy : {}".format(
e + 1, epochs, i + 1, train_dl.__len__(), computed_metrics["loss"], computed_metrics["accuracy"]
)
)
batch_metrics.append(computed_metrics)
print("\n")
val_batch_loss, val_batch_acc = [], []
for i in range(val_dl.__len__()):
val_batch = {}
val_batch["x"], val_batch["y"] = val_dl.__getitem__(i)
val_metrics = eval_step(state, val_batch)
val_batch_loss.append(val_metrics["loss"])
val_batch_acc.append(val_metrics["accuracy"])
eval_loss, eval_acc = jnp.mean(jnp.array(val_batch_loss)), jnp.mean(jnp.array(val_batch_acc))
sys.stdout.write(
"Validation Results : Epoch : {} Validation Loss : {} Validation Accuracy : {}".format(
e + 1, jax.device_get(eval_loss), jax.device_get(eval_acc)
)
)
print("\n")
return state
Explanation: Model
End of explanation
epochs = 10
trained_state = train(state, epochs)
Explanation: Training
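If you want to reuse the fine-tuned model later, the resulting trained_state can be serialized. A minimal sketch (the checkpoint directory is an arbitrary choice, and the exact checkpointing API depends on your Flax version; newer releases delegate to Orbax):
from flax.training import checkpoints
# Save the parameters and batch statistics of the fine-tuned model.
checkpoints.save_checkpoint(ckpt_dir="/content/checkpoints", target=trained_state, step=epochs, overwrite=True)
# Later, restore into an object with the same structure as `state`.
restored_state = checkpoints.restore_checkpoint(ckpt_dir="/content/checkpoints", target=state)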
End of explanation
test_dl = DataLoader(batch_size=1, motiv="test")
ix = jrand.randint(key, shape=(1, 1), minval=0, maxval=test_dl.__len__() - 1)
test_imgs, test_labels = test_dl.__getitem__(jax.device_get(ix)[0][0])
test_img1 = test_imgs[0]
test_label1 = jax.device_get(test_labels)[0]
viewable_img1 = ((test_img1 - test_img1.min()) / (test_img1.max() - test_img1.min())) * 255.0
plt.imshow(viewable_img1.astype(jnp.uint8))
plt.show()
print("True Label : ", test_label1)
print(
"Prediction : ",
jax.device_get(
jnp.argmax(
jax.nn.softmax(
trained_state.apply_fn(
{"params": trained_state.params, "batch_stats": trained_state.batch_stats},
test_img1.reshape(1, 224, 224, 3),
)
)
)
),
)
test_img2 = test_imgs[1]
test_label2 = jax.device_get(test_labels)[1]
viewable_img2 = ((test_img2 - test_img2.min()) / (test_img2.max() - test_img2.min())) * 255.0
plt.imshow(viewable_img2.astype(jnp.uint8))
plt.show()
print("True Label : ", test_label2)
print(
"Prediction : ",
jax.device_get(
jnp.argmax(
jax.nn.softmax(
trained_state.apply_fn(
{"params": trained_state.params, "batch_stats": trained_state.batch_stats},
test_img2.reshape(1, 224, 224, 3),
)
)
)
),
)
Explanation: Testing The Model
<ul>
<li>0 = No HotDog</li>
<li>1 = HotDog</li>
</ul>
End of explanation |
13,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The problem with my w(\theta) calculation appears to be fairly fundamental
Step1: Load up the tptY3 buzzard mocks.
Step2: Load up a snapshot at a redshift near the center of this bin.
Step3: This code loads a particular snapshot and a particular HOD model. In this case, 'redMagic' is the Zheng07 HOD with the f_c variable added in.
Step4: Take the zspec in our selected zbin to calculate the dN/dz distribution. The below cell calculates the redshift distribution prefactor
$$ W = \frac{2}{c}\int_0^{\infty} dz H(z) \left(\frac{dN}{dz} \right)^2 $$
Step5: If we happened to choose a model with assembly bias, set it to 0. Leave all parameters as their defaults, for now.
Step6: Use my code's wrapper for halotools' xi calculator. Full source code can be found here.
Step7: Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now.
Step8: This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta).
This plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. The red lines show the maximum value of r for the integral I'm performing.
Step9: Perform the below integral in each theta bin
Step10: The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.
Step11: The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason. | Python Code:
from pearce.mocks import cat_dict
import numpy as np
from os import path
from astropy.io import fits
from astropy import constants as const, units as unit
import george
from george.kernels import ExpSquaredKernel
import matplotlib
#matplotlib.use('Agg')
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
Explanation: The problem with my w(\theta) calculation appears to be fairly fundamental: calculating in a snapshot just gives very strange answers. I'm gonna try directly integrating the 3-d correlation function, but it is possible that I'll have to emulate to get that quite right.
There is a prefactor which I'm going to compute directly from the redMagic data that I have.
End of explanation
fname = '/u/ki/jderose/public_html/bcc/measurement/y3/3x2pt/buzzard/flock/buzzard-2/tpt_Y3_v0.fits'
hdulist = fits.open(fname)
z_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9])
zbin=1
a = 0.81120
z = 1.0/a - 1.0
Explanation: Load up the tptY3 buzzard mocks.
End of explanation
print z
Explanation: Load up a snapshot at a redshift near the center of this bin.
End of explanation
cosmo_params = {'simname':'chinchilla', 'Lbox':400.0, 'scale_factors':[a]}
cat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!
cat.load_catalog(a)
#cat.h = 1.0
#halo_masses = cat.halocat.halo_table['halo_mvir']
cat.load_model(a, 'redMagic')
hdulist.info()
Explanation: This code loads a particular snapshot and a particular HOD model. In this case, 'redMagic' is the Zheng07 HOD with the f_c variable added in.
End of explanation
nz_zspec = hdulist[8]
#N = 0#np.zeros((5,))
N_total = np.sum([row[2+zbin] for row in nz_zspec.data])
dNdzs = []
zs = []
W = 0
for row in nz_zspec.data:
N = row[2+zbin]
dN = N*1.0/N_total
#volIn, volOut = cat.cosmology.comoving_volume(row[0]), cat.cosmology.comoving_volume(row[2])
#fullsky_volume = volOut-volIn
#survey_volume = fullsky_volume*area/full_sky
#nd = dN/survey_volume
dz = row[2] - row[0]
#print row[2], row[0]
dNdz = dN/dz
H = cat.cosmology.H(row[1])
W+= dz*H*(dNdz)**2
dNdzs.append(dNdz)
zs.append(row[1])
#for idx, n in enumerate(row[3:]):
# N[idx]+=n
W = 2*W/const.c
print W
N_z = [row[2+zbin] for row in nz_zspec.data]
N_total = np.sum(N_z)#*0.01
plt.plot(zs,N_z/N_total)
plt.xlim(0,1.0)
len(dNdzs)
plt.plot(zs, dNdzs)
plt.vlines(z, 0,8)
plt.xlim(0,1.0)
plt.xlabel(r'$z$')
plt.ylabel(r'$dN/dz$')
len(nz_zspec.data)
np.sum(dNdzs)
np.sum(dNdzs)/len(nz_zspec.data)
W.to(1/unit.Mpc)
Explanation: Take the zspec in our selected zbin to calculate the dN/dz distribution. The below cell calculates the redshift distribution prefactor
$$ W = \frac{2}{c}\int_0^{\infty} dz H(z) \left(\frac{dN}{dz} \right)^2 $$
End of explanation
4.51077317e-03
params = cat.model.param_dict.copy()
#params['mean_occupation_centrals_assembias_param1'] = 0.0
#params['mean_occupation_satellites_assembias_param1'] = 0.0
params['logMmin'] = 13.4
params['sigma_logM'] = 0.1
params['f_c'] = 0.19
params['alpha'] = 1.0
params['logM1'] = 14.0
params['logM0'] = 12.0
print params
cat.populate(params)
nd_cat = cat.calc_analytic_nd()
print nd_cat
area = 5063 #sq degrees
full_sky = 41253 #sq degrees
volIn, volOut = cat.cosmology.comoving_volume(z_bins[zbin-1]), cat.cosmology.comoving_volume(z_bins[zbin])
fullsky_volume = volOut-volIn
survey_volume = fullsky_volume*area/full_sky
nd_mock = N_total/survey_volume
print nd_mock
nd_mock.value/nd_cat
#compute the mean mass
mf = cat.calc_mf()
HOD = cat.calc_hod()
mass_bin_range = (9,16)
mass_bin_size = 0.01
mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )
mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\
np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])
print mean_host_mass
10**0.35
N_total
theta_bins = np.logspace(np.log10(0.004), 0, 24)#/60
tpoints = (theta_bins[1:]+theta_bins[:-1])/2
r_bins = np.logspace(-0.5, 1.7, 16)
rpoints = (r_bins[1:]+r_bins[:-1])/2
Explanation: If we happened to choose a model with assembly bias, set it to 0. Leave all parameters as their defaults, for now.
End of explanation
xi = cat.calc_xi(r_bins, do_jackknife=False)
Explanation: Use my code's wrapper for halotools' xi calculator. Full source code can be found here.
End of explanation
kernel = ExpSquaredKernel(0.05)
gp = george.GP(kernel)
gp.compute(np.log10(rpoints))
print xi
xi[xi<=0] = 1e-2 #ack
from scipy.stats import linregress
m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
plt.plot(rpoints, (2.22353827e+03)*(rpoints**(-1.88359)))
#plt.plot(rpoints, b2*(rpoints**m2))
plt.scatter(rpoints, xi)
plt.loglog();
plt.plot(np.log10(rpoints), b+(np.log10(rpoints)*m))
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
#plt.loglog();
print m,b
rpoints_dense = np.logspace(-0.5, 2, 500)
plt.scatter(rpoints, xi)
plt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))
plt.loglog();
Explanation: Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now.
End of explanation
theta_bins_rm = np.logspace(np.log10(2.5), np.log10(250), 21)/60 #binning used in buzzard mocks
tpoints_rm = (theta_bins_rm[1:]+theta_bins_rm[:-1])/2.0
rpoints_dense = np.logspace(-1.5, 2, 500)
x = cat.cosmology.comoving_distance(z)
plt.scatter(rpoints, xi)
plt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))
plt.vlines((a*x*np.radians(tpoints_rm)).value, 1e-2, 1e4)
plt.vlines((a*np.sqrt(x**2*np.radians(tpoints_rm)**2+unit.Mpc*unit.Mpc*10**(1.7*2))).value, 1e-2, 1e4, color = 'r')
plt.loglog();
Explanation: This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta).
This plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. The red lines show the maximum value of r for the integral I'm performing.
End of explanation
x = cat.cosmology.comoving_distance(z)
print x
np.radians(tpoints_rm)
#a subset of the data from above. I've verified it's correct, but we can look again.
wt_redmagic = np.loadtxt('/u/ki/swmclau2/Git/pearce/bin/mcmc/buzzard2_wt_%d%d.npy'%(zbin,zbin))
tpoints_rm
mathematica_calc = np.array([122.444, 94.8279, 73.4406, 56.8769, 44.049, 34.1143, 26.4202, \
20.4614, 15.8466, 12.2726, 9.50465, 7.36099, 5.70081, 4.41506, \
3.41929, 2.64811, 2.05086, 1.58831, 1.23009, 0.952656])#*W
Explanation: Perform the below integral in each theta bin:
$$ w(\theta) = W \int_0^\infty du \xi \left(r = \sqrt{u^2 + \bar{x}^2(z)\theta^2} \right) $$
Where $\bar{x}$ is the median comoving distance to z.
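A minimal sketch of that integration using the Gaussian-process interpolation of xi from above; the u grid (its spacing and the 10**1.7 Mpc upper limit, chosen to match the range the GP was trained on) is my own choice. This is the wt array that gets compared to the Buzzard measurement below:
ubins = np.linspace(0, 10**1.7, 1001)
ubc = (ubins[1:]+ubins[:-1])/2.0
du = ubins[1:]-ubins[:-1]
wt = np.zeros_like(tpoints_rm)
for bin_no, t_med in enumerate(np.radians(tpoints_rm)):
    r = np.sqrt(ubc**2 + (x.to("Mpc").value*t_med)**2)
    # the GP predicts log10(xi), so exponentiate before integrating
    xi_r = np.power(10, gp.predict(np.log10(xi), np.log10(r))[0])
    wt[bin_no] = W.to("1/Mpc").value*np.sum(du*xi_r)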
End of explanation
print W.value
print W.to("1/Mpc").value
print W.value
from scipy.special import gamma
def wt_analytic(m,b,t,x):
return W.to("1/Mpc").value*b*np.sqrt(np.pi)*(t*x)**(1 + m)*(gamma(-(1./2) - m/2.)/(2*gamma(-(m/2.))) )
plt.plot(tpoints_rm, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
#plt.plot(tpoints_rm, W.to("1/Mpc").value*mathematica_calc, label = 'Mathematica Calc')
#plt.plot(tpoints_rm, wt_analytic(m,10**b, np.radians(tpoints_rm), x),label = 'Mathematica Calc' )
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
wt_redmagic/(W.to("1/Mpc").value*mathematica_calc)
import cPickle as pickle
with open('/u/ki/jderose/ki23/bigbrother-addgals/bbout/buzzard-flock/buzzard-0/buzzard0_lb1050_xigg_ministry.pkl') as f:
xi_rm = pickle.load(f)
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].mbins
xi_rm.metrics[0].cbins
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(rpoints, xi)
for i in xrange(3):
for j in xrange(3):
plt.plot(xi_rm.metrics[0].rbins[:-1], xi_rm.metrics[0].xi[:,i,j,0])
plt.loglog();
plt.subplot(211)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
#plt.ylim([0,10])
plt.subplot(212)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
plt.ylim([2.0,4])
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].rbins #Mpc/h
Explanation: The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.
End of explanation
x = cat.cosmology.comoving_distance(z)*a
#ubins = np.linspace(10**-6, 10**2.0, 1001)
ubins = np.logspace(-6, 2.0, 51)
ubc = (ubins[1:]+ubins[:-1])/2.0
#NLL
def liklihood(params, wt_redmagic,x, tpoints):
#print _params
#prior = np.array([ PRIORS[pname][0] < v < PRIORS[pname][1] for v,pname in zip(_params, param_names)])
#print param_names
#print prior
#if not np.all(prior):
# return 1e9
#params = {p:v for p,v in zip(param_names, _params)}
#cat.populate(params)
#nd_cat = cat.calc_analytic_nd(parmas)
#wt = np.zeros_like(tpoints_rm[:-5])
#xi = cat.calc_xi(r_bins, do_jackknife=False)
#m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
#if np.any(xi < 0):
# return 1e9
#kernel = ExpSquaredKernel(0.05)
#gp = george.GP(kernel)
#gp.compute(np.log10(rpoints))
#for bin_no, t_med in enumerate(np.radians(tpoints_rm[:-5])):
# int_xi = 0
# for ubin_no, _u in enumerate(ubc):
# _du = ubins[ubin_no+1]-ubins[ubin_no]
# u = _u*unit.Mpc*a
# du = _du*unit.Mpc*a
#print np.sqrt(u**2+(x*t_med)**2)
# r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h
#if r > unit.Mpc*10**1.7: #ignore large scales. In the full implementation this will be a transition to a bias model.
# int_xi+=du*0
#else:
# the GP predicts in log, so i predict in log and re-exponate
# int_xi+=du*(np.power(10, \
# gp.predict(np.log10(xi), np.log10(r.value), mean_only=True)[0]))
# int_xi+=du*(10**b)*(r.to("Mpc").value**m)
#print (((int_xi*W))/wt_redmagic[0]).to("m/m")
#break
# wt[bin_no] = int_xi*W.to("1/Mpc")
wt = wt_analytic(params[0],params[1], tpoints, x.to("Mpc").value)
chi2 = np.sum(((wt - wt_redmagic[:-5])**2)/(1e-3*wt_redmagic[:-5]) )
#chi2=0
#print nd_cat
#print wt
#chi2+= ((nd_cat-nd_mock.value)**2)/(1e-6)
#mf = cat.calc_mf()
#HOD = cat.calc_hod()
#mass_bin_range = (9,16)
#mass_bin_size = 0.01
#mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )
#mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\
# np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])
#chi2+=((13.35-np.log10(mean_host_mass))**2)/(0.2)
print chi2
return chi2 #nll
print nd_mock
print wt_redmagic[:-5]
import scipy.optimize as op
results = op.minimize(liklihood, np.array([-2.2, 10**1.7]),(wt_redmagic,x, tpoints_rm[:-5]))
results
#plt.plot(tpoints_rm, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
plt.plot(tpoints_rm, wt_analytic(-1.88359, 2.22353827e+03,tpoints_rm, x.to("Mpc").value), label = 'Mathematica Calc')
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
plt.plot(np.log10(rpoints), np.log10(2.22353827e+03)+(np.log10(rpoints)*(-1.88)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
np.array([v for v in params.values()])
Explanation: The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason.
End of explanation |
13,484 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Previous
2.8 Matching Multiline Patterns
Problem
You are trying to match a block of text with a regular expression, and you need the match to span multiple lines.
Solution
This problem typically arises when you use the dot (.) to match any character and forget that the dot does not match newlines. For example, suppose you want to match C-style comments:
Step1: To fix the problem, you can modify the pattern string to add support for newlines. For example:
Step2: In this pattern, (? | Python Code:
import re
comment = re.compile(r"/\*(.*?)\*/")
text1 = '/* this is a comment */'
text2 = '''/* this is a
multiline comment */
'''
comment.findall(text1)
comment.findall(text2)
Explanation: Previous
2.8 Matching Multiline Patterns
Problem
You are trying to match a block of text with a regular expression, and you need the match to span multiple lines.
Solution
This problem typically arises when you use the dot (.) to match any character and forget that the dot does not match newlines. For example, suppose you want to match C-style comments:
End of explanation
comment = re.compile(r'/\*((?:.|\n)*?)\*/')
comment.findall(text2)
Explanation: To fix the problem, you can modify the pattern string to add support for newlines. For example:
End of explanation
comment = re.compile(r'/\*(.*?)\*/', re.DOTALL)
comment.findall(text2)
Explanation: In this pattern, (?:.|\n) specifies a noncapture group (that is, a group that is defined only for matching purposes and cannot be captured separately or numbered).
Discussion
The re.compile() function accepts a flag argument called re.DOTALL, which is very useful here. It makes the dot (.) in a regular expression match any character, including newlines. For example:
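If you prefer to keep this behaviour inside the pattern itself rather than as a compile flag, Python's inline (?s) modifier is equivalent to passing re.DOTALL:
comment = re.compile(r'(?s)/\*(.*?)\*/')
comment.findall(text2)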
End of explanation |
13,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executing Squonk services
This notebook is an example of executing Squonk services using Python's requests module.
It assumes you are executing against the JobExecutor service running in an OpenShift environment.
Step1: Check basic operation
Step2: Authentication
Step3: List all services
Step4: Getting details of a particular service
Step5: List all jobs
Step6: Execute the 'Dataset Slice' service
Step7: Get the status of the current job
Step8: Get the results of a job.
Step9: Delete the job
Step10: Delete all jobs
This is to help clean up if you get into a mess!
Step11: Other services
In addition to the simple 'dataset slice' service many more meaningful ones are available.
Here are some examples illustrating the different categories of Squonk services | Python Code:
import requests
import json
# requests_toolbelt module is used to handle the multipart responses.
# Need to `pip install requests-toolbelt` from a terminal to install. This might need doing each time the Notebook pod starts
from requests_toolbelt.multipart import decoder
# Define some URLs and params
base_url = 'https://jobexecutor.prod.openrisknet.org/jobexecutor/rest'
services_url = base_url + '/v1/services'
jobexecutor_url = base_url + '/v1/jobs'
keycloak_url = 'https://sso.prod.openrisknet.org/auth/realms/openrisknet/protocol/openid-connect/token'
# set to False if self signed certificates are being used
tls_verify=True
Explanation: Executing Squonk services
This notebook is an example of executing Squonk services using Python's requests module.
It assumes you are executing against the JobExecutor service running in an OpenShift environment.
End of explanation
# Test the PING service. Should give a 200 response and return 'OK'.
# If not then nothing else is going to work.
url = base_url + '/ping'
print("Requesting GET " + url)
resp = requests.get(url, verify=tls_verify)
print('Response Code: ' + str(resp.status_code))
print(resp.text)
Explanation: Check basic operation
End of explanation
# Need to specify your Keycloak SSO username and password so that we can get a token
import getpass
username = input('Username')
password = getpass.getpass('Password')
# Get token from Keycloak. This will have a finite lifetime.
# If your requests are getting a 401 error your token has probably expired.
data = {'grant_type': 'password', 'client_id': 'squonk-jobexecutor', 'username': username, 'password': password}
kresp = requests.post(keycloak_url, data = data)
j = kresp.json()
token = j['access_token']
token
Explanation: Authentication
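Because the token has a finite lifetime, it is convenient to wrap the request above in a small helper that can be re-run whenever 401 responses start appearing. A sketch based only on the call already shown:
def get_token(username, password):
    data = {'grant_type': 'password', 'client_id': 'squonk-jobexecutor', 'username': username, 'password': password}
    resp = requests.post(keycloak_url, data=data, verify=tls_verify)
    resp.raise_for_status()
    return resp.json()['access_token']
# token = get_token(username, password)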
End of explanation
# Get a list of all the Squonk services that can be executed.
#
print("Requesting GET " + services_url)
jobs_resp = requests.get(services_url, headers={'Authorization': 'bearer ' + token}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
json = jobs_resp.json()
print(str(len(json)) + " services found")
print(json)
Explanation: List all services
End of explanation
# find the service ID from the list in the list services cell
#service_id = 'core.dataset.filter.slice.v1'
#service_id = 'pipelines.rdkit.conformer.basic'
service_id = 'pipelines.rdkit.o3da.basic'
url = services_url + '/' + service_id
print("Requesting GET " + url)
jobs_resp = requests.get(url, headers={'Authorization': 'bearer ' + token}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
json = jobs_resp.json()
print(json)
Explanation: Getting details of a particular service
End of explanation
# Result of the request is an array of JobStatus objects.
# The job ID and status are listed
print("Requesting GET " + jobexecutor_url)
jobs_resp = requests.get(jobexecutor_url, headers={'Authorization': 'bearer ' + token}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
json = jobs_resp.json()
print(str(len(json)) + " jobs found")
for status in json:
print(status['jobId'] + ' ' + status['status'])
Explanation: List all jobs
End of explanation
# The 'Dataset slice' service takes a slice through a dataset, specified by the number of records to skip and then the number to include.
# This is one of Squonk's 'internal' services.
# The job ID is stored in the job_id variable.
url = jobexecutor_url + '/core.dataset.filter.slice.v1'
data = {
'options': '{"skip":2,"count":3}',
'input_data': ('input_data', open('nci10.data', 'rb'), 'application/x-squonk-molecule-object+json'),
'input_metadata': ('input_metadata', open('nci10.metadata', 'rb'), 'application/x-squonk-dataset-metadata+json')
}
print("Requesting POST " + jobexecutor_url)
jobs_resp = requests.post(url, files=data, headers = {'Authorization': 'bearer ' + token, 'Content-Type': 'multipart/form'}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
job_status = jobs_resp.json()
job_id = job_status['jobId']
print(job_status)
print("\nJobID: " + job_id)
Explanation: Execute the 'Dataset Slice' service
End of explanation
# The job is defined by the job_id variable and is probably the last job executed
url = jobexecutor_url + '/' + job_id + '/status'
print("Requesting GET " + url )
jobs_resp = requests.get(url, headers={'Authorization': 'bearer ' + token}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
json = jobs_resp.json()
json
Explanation: Get the status of the current job
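For longer-running services you will usually poll this endpoint until the job reports RESULTS_READY rather than checking it once. A simple sketch; the sleep interval and poll limit are arbitrary choices, and you may want to treat other terminal states as failures:
import time
def wait_for_results(job_id, poll_seconds=5, max_polls=60):
    status_url = jobexecutor_url + '/' + job_id + '/status'
    status = None
    for _ in range(max_polls):
        status = requests.get(status_url, headers={'Authorization': 'bearer ' + token}, verify=tls_verify).json()
        if status['status'] == 'RESULTS_READY':
            break
        time.sleep(poll_seconds)
    return status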
End of explanation
# The job is defined by the job_id variable and is probably the last job executed.
# The status of the job needs to be 'RESULTS_READY'
# The response is a multipart response, typically containing the job status, the results metadata and the results data.
# This method can be called for a job any number of times until the job is deleted.
url = jobexecutor_url + '/' + job_id + '/results'
print("Requesting GET " + url )
jobs_resp = requests.get(url, headers={'Authorization': 'bearer ' + token}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
multipart_data = decoder.MultipartDecoder.from_response(jobs_resp)
for part in multipart_data.parts:
print(part.content)
print(part.headers)
Explanation: Get the results of a job.
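Rather than just printing the parts, you will usually want to write them out. A minimal sketch that saves each part of the multipart response to a numbered file (which part is the status, metadata or data can be read from its headers):
for i, part in enumerate(multipart_data.parts):
    filename = 'results_part_%d' % i
    with open(filename, 'wb') as f:
        f.write(part.content)
    print('Wrote ' + filename + ' (' + str(len(part.content)) + ' bytes)')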
End of explanation
# Once you have fetched the results you MUST delete the job.
# The job is defined by the job_id variable and is probably the last job executed.
url = jobexecutor_url + '/' + job_id
print("Requesting DELETE " + url)
jobs_resp = requests.delete(url, headers={'Authorization': 'bearer ' + token}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
json = jobs_resp.json()
if 'status' in json and json['status'] == 'COMPLETED':
print('Job deleted')
else:
print('Problem deleting job')
Explanation: Delete the job
End of explanation
# Delete all jobs
# First get the current jobs
jobs_resp = requests.get(jobexecutor_url, headers={'Authorization': 'bearer ' + token}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
json = jobs_resp.json()
print('Found ' + str(len(json)) + ' jobs')
# Now go through them and delete
# If successful the status of the job will then be COMPLETED.
for job in json:
id = job['jobId']
url = jobexecutor_url + '/' + id
print("Deleting " + url)
jobs_resp = requests.delete(url, headers={'Authorization': 'bearer ' + token}, verify=tls_verify)
j = jobs_resp.json()
print("Status: " + j['status'])
Explanation: Delete all jobs
This is to help clean up if you get into a mess!
End of explanation
# The 'Lipinski filter' calculates the classical rule-of-five properties and allows filtering based on these.
# We have implementations for ChemAxon and RDKit. Here we use the RDKit one.
# The default filter is the classical drug-likeness one defined by Lipinski but you can specify your own criteria instead.
# This is one of Squonk's 'HTTP' services.
# The job ID is stored in the job_id variable.
url = jobexecutor_url + '/rdkit.calculators.lipinski'
data = {
'options': '{"filterMode":"INCLUDE_PASS"}',
'input_data': ('input_data', open('nci10.data', 'rb'), 'application/x-squonk-molecule-object+json'),
'input_metadata': ('input_metadata', open('nci10.metadata', 'rb'), 'application/x-squonk-dataset-metadata+json')
}
print("Requesting POST " + url)
jobs_resp = requests.post(url, files=data, headers = {'Authorization': 'bearer ' + token, 'Content-Type': 'multipart/form'}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
job_status = jobs_resp.json()
job_id = job_status['jobId']
print(job_status)
print("\nJobID: " + job_id)
# passing data as SDF
url = jobexecutor_url + '/rdkit.calculators.lipinski'
data = {
'options': '{"filterMode":"INCLUDE_PASS"}',
'input': ('input', open('Kinase_inhibs.sdf', 'rb'), 'chemical/x-mdl-sdfile')
}
print("Requesting POST " + url)
jobs_resp = requests.post(url, files=data, headers = {'Authorization': 'bearer ' + token, 'Content-Type': 'multipart/form'}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
job_status = jobs_resp.json()
job_id = job_status['jobId']
print(job_status)
print("\nJobID: " + job_id)
# sucos scoring passing 2 inputs as SDF
url = jobexecutor_url + '/pipelines.rdkit.sucos.basic'
data = {
'options': '{}',
'input': ('input', open('mols.sdf', 'rb'), 'chemical/x-mdl-sdfile'),
'target': ('target', open('benzene.sdf', 'rb'), 'chemical/x-mdl-sdfile')
}
print("Requesting POST " + url)
jobs_resp = requests.post(url, files=data, headers = {'Authorization': 'bearer ' + token, 'Content-Type': 'multipart/form'}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
job_status = jobs_resp.json()
job_id = job_status['jobId']
print(job_status)
print("\nJobID: " + job_id)
# open3dAlign scoring passing 2 inputs as SDF
# passing the queryMol as pyrimethamine.mol does not work - it needs to be converted to SDF
url = jobexecutor_url + '/pipelines.rdkit.o3da.basic'
data = {
'options': '{"arg.crippen":"false"}',
'input': ('input', open('dhfr_3d.sdf', 'rb'), 'chemical/x-mdl-sdfile'),
'queryMol': ('queryMol', open('pyrimethamine.sdf', 'rb'), 'chemical/x-mdl-sdfile')
}
print("Requesting POST " + url)
jobs_resp = requests.post(url, files=data, headers = {'Authorization': 'bearer ' + token, 'Content-Type': 'multipart/form'}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
job_status = jobs_resp.json()
job_id = job_status['jobId']
print(job_status)
print("\nJobID: " + job_id)
# open3dAlign scoring passing inputs as dataset and query as SDF
url = jobexecutor_url + '/pipelines.rdkit.o3da.basic'
data = {
'options': '{"arg.crippen":"false"}',
'input_data': ('input_data', open('dhfr_3d.data.gz', 'rb'), 'application/x-squonk-molecule-object+json'),
'input_metadata': ('input_metadata', open('dhfr_3d.metadata', 'rb'), 'application/x-squonk-dataset-metadata+json'),
'queryMol': ('queryMol', open('pyrimethamine.sdf', 'rb'), 'chemical/x-mdl-sdfile')
}
print("Requesting POST " + url)
jobs_resp = requests.post(url, files=data, headers = {'Authorization': 'bearer ' + token, 'Content-Type': 'multipart/form'}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
job_status = jobs_resp.json()
job_id = job_status['jobId']
print(job_status)
print("\nJobID: " + job_id)
# The 'Conformer generator' uses the RDKit ETKDG conformer generation tool to generate a number of conformers for the input structures.
# This is one of Squonk's 'Docker' services.
# The job ID is stored in the job_id variable.
service_id = 'pipelines.rdkit.conformer.basic'
data = {
'options': '{"arg.num":10,"arg.method":"RMSD"}',
'input_data': ('input_data', open('nci10.data', 'rb'), 'application/x-squonk-molecule-object+json'),
'input_metadata': ('input_metadata', open('nci10.metadata', 'rb'), 'application/x-squonk-dataset-metadata+json')
}
jobs_resp = requests.post(jobexecutor_url + '/' + service_id, files=data, headers = {'Authorization': 'bearer ' + token, 'Content-Type': 'multipart/form'}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
job_status = jobs_resp.json()
job_id = job_status['jobId']
print(job_status)
print("\nJobID: " + job_id)
# Similarity screening using RDKit.
# This is one of Squonk's 'Nextflow' services.
# The job ID is stored in the job_id variable.
# NOTE: THIS IS NOT WORKING AS THE QUERY STRUCTURE IS NOT BEING PASSED CORRECTLY
service_id = 'pipelines.rdkit.screen.basic'
data = {
'options': '{"arg.query":{"source":"CC1=CC(=O)C=CC1=O","format":"smiles"},"arg.sim":{"minValue":0.5,"maxValue":1.0}}',
'input_data': ('input_data', open('nci10_data.json', 'rb'), 'application/x-squonk-molecule-object+json'),
'input_metadata': ('input_metadata', open('nci10_meta.json', 'rb'), 'application/x-squonk-dataset-metadata+json')
}
jobs_resp = requests.post(jobexecutor_url + '/' + service_id, files=data, headers = {'Authorization': 'bearer ' + token, 'Content-Type': 'multipart/form'}, verify=tls_verify)
print('Response Code: ' + str(jobs_resp.status_code))
job_status = jobs_resp.json()
job_id = job_status['jobId']
print(job_status)
print("\nJobID: " + job_id)
Explanation: Other services
In addition to the simple 'dataset slice' service many more meaningful ones are available.
Here are some examples illustrating the different categories of Squonk services:
Built-in services running within the job executor Java process. These are limited to very simple and very fast operations.
HTTP services running in the chemservices module that stream results and are designed for relatively short executions (seconds or at most a few minutes), with the results being streamed immediately back to the requester.
Services running in a Docker container that are given the input data as files and write the results as files. These are designed for more flexible implementation of services that can take longer to execute.
Nextflow services. Similar to Docker services, but defined as a Nextflow workflow that typically allows parallel execution on the K8S cluster or potentially on an external cluster.
Execute one of these instead of the dataset slice one above.
End of explanation |
13,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TF-Agents Authors.
Step1: Networks
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Defining Networks
Network API
In TF-Agents we subclass Keras Networks. With it we can:
Simplify the copy operations required when creating target networks.
Perform automatic variable creation when calling network.variables().
Validate inputs based on the network's input_spec.
An EncodingNetwork is composed of the following, mostly optional, layers:
Preprocessing layers
Preprocessing combiner
Conv2D
Flatten
Dense
The special thing about encoding networks is that input preprocessing is applied. Input preprocessing is possible via the preprocessing_layers and preprocessing_combiner layers. Each of these can be specified as a nested structure. If the preprocessing_layers nest is shallower than input_tensor_spec, then the layers will get the subnests. For example, if:
input_tensor_spec = ([TensorSpec(3)] * 2, [TensorSpec(3)] * 5) preprocessing_layers = (Layer1(), Layer2())
then preprocessing will call:
preprocessed = [preprocessing_layers[0](observations[0]), preprocessing_layers[1](observations[1])]
However, if:
preprocessing_layers = ([Layer1() for _ in range(2)], [Layer2() for _ in range(5)])
then preprocessing will call:
preprocessed = [ layer(obs) for layer, obs in zip(flatten(preprocessing_layers), flatten(observations)) ]
Custom networks
To create your own networks you only have to override the __init__ and call methods. Let's use what we learned about EncodingNetworks to create a custom ActorNetwork that takes observations which contain an image and a vector.
Step3: Let's create a RandomPyEnvironment to generate structured observations and validate our implementation.
Step4: Since we defined the observations to be a dict, we need to create preprocessing layers to handle them.
Step5: Now that we have an actor network, we can process observations from the environment. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import numpy as np
from tf_agents.environments import random_py_environment
from tf_agents.environments import tf_py_environment
from tf_agents.networks import encoding_network
from tf_agents.networks import network
from tf_agents.networks import utils
from tf_agents.specs import array_spec
from tf_agents.utils import common as common_utils
from tf_agents.utils import nest_utils
tf.compat.v1.enable_v2_behavior()
Explanation: Networks
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/agents/tutorials/8_networks_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/agents/tutorials/8_networks_tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/agents/tutorials/8_networks_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/agents/tutorials/8_networks_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드하기</a></td>
</table>
Introduction
In this colab we cover how to define custom networks for your agents. The networks help us define the model that is trained by agents. In TF-Agents you will find several different types of networks which are useful across agents:
Main Networks
QNetwork: Used in Q-learning for environments with discrete actions, this network maps an observation to value estimates for each possible action.
CriticNetworks: Also referred to as ValueNetworks in the literature; learns to estimate some version of a Value function that maps a state into an estimate of the expected return of a policy. These networks estimate how good the state the agent is currently in is.
ActorNetworks: Learn a mapping from observations to actions. These networks are usually used by our policies to generate actions.
ActorDistributionNetworks: Similar to ActorNetworks, but these generate a distribution which a policy can then sample to generate actions.
Helper Networks
EncodingNetwork: Allows users to easily define a mapping of preprocessing layers to apply to a network's input.
DynamicUnrollLayer: Automatically resets the network's state on episode boundaries as it is applied over a time sequence.
ProjectionNetwork: Networks like CategoricalProjectionNetwork or NormalProjectionNetwork take inputs and generate the required parameters to produce Categorical or Normal distributions.
All examples in TF-Agents come with pre-configured networks. However these networks are not set up to handle complex observations.
If you find yourself in a situation where your environment exposes more than one observation/action and you need to customize your networks, then this tutorial is for you!
Setup
If you haven't installed tf-agents yet, run:
End of explanation
class ActorNetwork(network.Network):
def __init__(self,
observation_spec,
action_spec,
preprocessing_layers=None,
preprocessing_combiner=None,
conv_layer_params=None,
fc_layer_params=(75, 40),
dropout_layer_params=None,
activation_fn=tf.keras.activations.relu,
enable_last_layer_zero_initializer=False,
name='ActorNetwork'):
super(ActorNetwork, self).__init__(
input_tensor_spec=observation_spec, state_spec=(), name=name)
# For simplicity we will only support a single action float output.
self._action_spec = action_spec
flat_action_spec = tf.nest.flatten(action_spec)
if len(flat_action_spec) > 1:
raise ValueError('Only a single action is supported by this network')
self._single_action_spec = flat_action_spec[0]
if self._single_action_spec.dtype not in [tf.float32, tf.float64]:
raise ValueError('Only float actions are supported by this network.')
kernel_initializer = tf.keras.initializers.VarianceScaling(
scale=1. / 3., mode='fan_in', distribution='uniform')
self._encoder = encoding_network.EncodingNetwork(
observation_spec,
preprocessing_layers=preprocessing_layers,
preprocessing_combiner=preprocessing_combiner,
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params,
dropout_layer_params=dropout_layer_params,
activation_fn=activation_fn,
kernel_initializer=kernel_initializer,
batch_squash=False)
initializer = tf.keras.initializers.RandomUniform(
minval=-0.003, maxval=0.003)
self._action_projection_layer = tf.keras.layers.Dense(
flat_action_spec[0].shape.num_elements(),
activation=tf.keras.activations.tanh,
kernel_initializer=initializer,
name='action')
def call(self, observations, step_type=(), network_state=()):
outer_rank = nest_utils.get_outer_rank(observations, self.input_tensor_spec)
# We use batch_squash here in case the observations have a time sequence
# component.
batch_squash = utils.BatchSquash(outer_rank)
observations = tf.nest.map_structure(batch_squash.flatten, observations)
state, network_state = self._encoder(
observations, step_type=step_type, network_state=network_state)
actions = self._action_projection_layer(state)
actions = common_utils.scale_to_spec(actions, self._single_action_spec)
actions = batch_squash.unflatten(actions)
return tf.nest.pack_sequence_as(self._action_spec, [actions]), network_state
Explanation: Defining Networks
Network API
In TF-Agents we subclass Keras Networks. With it we can:
Simplify the copy operations required when creating target networks.
Perform automatic variable creation when calling network.variables().
Validate inputs based on the network's input_spec.
An EncodingNetwork is composed of the following, mostly optional, layers:
Preprocessing layers
Preprocessing combiner
Conv2D
Flatten
Dense
The special thing about encoding networks is that input preprocessing is applied. Input preprocessing is possible via the preprocessing_layers and preprocessing_combiner layers. Each of these can be specified as a nested structure. If the preprocessing_layers nest is shallower than input_tensor_spec, then the layers will get the subnests. For example, if:
input_tensor_spec = ([TensorSpec(3)] * 2, [TensorSpec(3)] * 5) preprocessing_layers = (Layer1(), Layer2())
then preprocessing will call:
preprocessed = [preprocessing_layers[0](observations[0]), preprocessing_layers[1](observations[1])]
However, if:
preprocessing_layers = ([Layer1() for _ in range(2)], [Layer2() for _ in range(5)])
then preprocessing will call:
preprocessed = [ layer(obs) for layer, obs in zip(flatten(preprocessing_layers), flatten(observations)) ]
Custom networks
To create your own networks you only have to override the __init__ and call methods. Let's use what we learned about EncodingNetworks to create a custom ActorNetwork that takes observations which contain an image and a vector.
End of explanation
action_spec = array_spec.BoundedArraySpec((3,), np.float32, minimum=0, maximum=10)
observation_spec = {
'image': array_spec.BoundedArraySpec((16, 16, 3), np.float32, minimum=0,
maximum=255),
'vector': array_spec.BoundedArraySpec((5,), np.float32, minimum=-100,
maximum=100)}
random_env = random_py_environment.RandomPyEnvironment(observation_spec, action_spec=action_spec)
# Convert the environment to a TFEnv to generate tensors.
tf_env = tf_py_environment.TFPyEnvironment(random_env)
Explanation: Let's create a RandomPyEnvironment to generate structured observations and validate our implementation.
End of explanation
preprocessing_layers = {
'image': tf.keras.models.Sequential([tf.keras.layers.Conv2D(8, 4),
tf.keras.layers.Flatten()]),
'vector': tf.keras.layers.Dense(5)
}
preprocessing_combiner = tf.keras.layers.Concatenate(axis=-1)
actor = ActorNetwork(tf_env.observation_spec(),
tf_env.action_spec(),
preprocessing_layers=preprocessing_layers,
preprocessing_combiner=preprocessing_combiner)
Explanation: Since we defined the observations to be a dict, we need to create preprocessing layers to handle them.
End of explanation
time_step = tf_env.reset()
actor(time_step.observation, time_step.step_type)
Explanation: Now that we have an actor network, we can process observations from the environment.
End of explanation |
13,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Context
John Doe remarked in #AP1432 that there may be too much code in our application that isn't used at all. Before migrating the application to the new platform, we have to analyze which parts of the system are still in use and which are not.
Idea
To understand how much code isn't used, we recorded the executed code in production with the coverage tool JaCoCo. The measurement took place between 21st Oct 2017 and 27th Oct 2017. The results were exported into a CSV file using the JaCoCo command line tool with the following command
Step1: Analysis
It was stated that whole packages wouldn't be needed anymore and that they could be safely removed. Therefore, we sum up the coverage data per class for each package and calculate the coverage ratio for each package.
Step2: We plot the data for the coverage ratio to get a brief overview of the result. | Python Code:
import pandas as pd
coverage = pd.read_csv("../input/spring-petclinic/jacoco.csv")
coverage = coverage[['PACKAGE', 'CLASS', 'LINE_COVERED' ,'LINE_MISSED']]
coverage['LINES'] = coverage.LINE_COVERED + coverage.LINE_MISSED
coverage.head(1)
Explanation: Context
John Doe remarked in #AP1432 that there may be too much code in our application that isn't used at all. Before migrating the application to the new platform, we have to analyze which parts of the system are still in use and which are not.
Idea
To understand how much code isn't used, we recorded the executed code in production with the coverage tool JaCoCo. The measurement took place between 21st Oct 2017 and 27th Oct 2017. The results were exported into a CSV file using the JaCoCo command line tool with the following command:
bash
java -jar jacococli.jar report "C:\Temp\jacoco.exec" --classfiles \
C:\dev\repos\buschmais-spring-petclinic\target\classes --csv jacoco.csv
The CSV file contains all lines of code that were passed through during the measurement's time span. We just take the relevant data and add an additional LINES column to be able to calculate the ratio between covered and missed lines later on.
End of explanation
grouped_by_packages = coverage.groupby("PACKAGE").sum()
grouped_by_packages['RATIO'] = grouped_by_packages.LINE_COVERED / grouped_by_packages.LINES
grouped_by_packages = grouped_by_packages.sort_values(by='RATIO')
grouped_by_packages
Explanation: Analysis
It was stated that whole packages wouldn't be needed anymore and that they could be safely removed. Therefore, we sum up the coverage data per class for each package and calculate the coverage ratio for each package.
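To turn the ratios into a concrete list of removal candidates, we can flag packages below a cut-off; the 10% threshold here is an arbitrary working assumption, not something given in the ticket:
removal_candidates = grouped_by_packages[grouped_by_packages.RATIO < 0.1]
removal_candidates[['LINES', 'LINE_COVERED', 'RATIO']]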
End of explanation
%matplotlib inline
grouped_by_packages[['RATIO']].plot(kind="barh", figsize=(8,2))
# Add PowerPoint Slide Generation here
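# One possible way to export the chart to a slide, assuming the optional
# python-pptx package is installed (`pip install python-pptx`); the file
# names and the blank slide layout index are arbitrary choices.
import matplotlib.pyplot as plt
from pptx import Presentation
from pptx.util import Inches
plt.savefig("coverage_ratio.png", bbox_inches="tight")  # persist the bar chart drawn above
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])  # layout 6 is the blank layout
slide.shapes.add_picture("coverage_ratio.png", Inches(1), Inches(1), width=Inches(8))
prs.save("coverage_report.pptx")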
Explanation: We plot the data for the coverage ratio to get a brief overview of the result.
End of explanation |
13,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Row-reduce and variable transformations in non-linear equation systems, an applied example
Step1: Let's consider
Step2: Let's define the stoichiometry and composition
Step3: and now a function for the system of equations
Step4: note how we passed rref=True to linear_exprs, this will give a linear system in reduced row echolon form the system of equations. The four preservation equations (one for charge and three for atom types) has one linearly dependent equation which is dropped by pyneqsys.symbolic.linear_exprs, and after adding our two equations from the equilibria we are left with 5 equations (same number as unknowns).
Step5: Now let's see how pyneqsys can transform our system
Step6: Note how the conservation laws became non-linear while the expressions corresponding to the equilibria became linear. | Python Code:
from __future__ import (absolute_import, division, print_function)
from functools import reduce, partial
from operator import mul
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
from pyneqsys.symbolic import SymbolicSys, TransformedSys, linear_exprs
sp.init_printing()
Explanation: Row-reduce and variable transformations in non-linear equation systems, an applied example: Chemical equilibria
One of the strengths of pyneqsys is its ability to represent the system of non-linear equations symbolically. This allows the user to reformulate the problem in an physically equivalent but in a numerically different form.
In this notebook we will look at how we can remove linearly dependent equations automatically and go from an overdetermined system to a system with equal number of unknowns as equations. The latter is the preferred form (when it's possible to achive) since it gives a square Jacboian matrix and there are a larger family of numerial methods which we can use to optimize it (i.e. root finding).
End of explanation
texnames = 'H^+ OH^- NH_4^+ NH_3 H_2O'.split()
n = len(texnames)
NH3_idx = texnames.index('NH_3')
NH3_varied = np.logspace(-7, 0)
c0 = 1e-7, 1e-7, 1e-7, 1, 55
K = Kw, Ka = 10**-14/55, 10**-9.24
Explanation: Let's consider:
$$ \rm
H_2O \rightleftharpoons H^+ + OH^- \
NH_4^+ \rightleftharpoons H^+ + NH_3
$$
End of explanation
stoichs = [[1, 1, 0, 0, -1], [1, 0, -1, 1, 0]] # our 2 equilibria
H = [1, 1, 4, 3, 2]
N = [0, 0, 1, 1, 0]
O = [0, 1, 0, 0, 1]
q = [1, -1, 1, 0, 0] # charge
preserv = [H, N, O, q]
Explanation: Let's define the stoichiometry and composition:
End of explanation
prod = lambda x: reduce(mul, x)
def get_f(x, params, backend, lnK):
init_concs = params[:n]
eq_constants = params[n:]
le = linear_exprs(preserv, x, linear_exprs(preserv, init_concs), rref=True)
if lnK:
return le + [
sum(backend.log(xi)*p for xi, p in zip(x, coeffs)) - backend.log(K)
for coeffs, K in zip(stoichs, eq_constants)
]
else:
return le + [
prod(xi**p for xi, p in zip(x, coeffs)) - K for coeffs, K in zip(stoichs, eq_constants)
]
Explanation: and now a function for the system of equations:
End of explanation
neqsys = SymbolicSys.from_callback(
partial(get_f, lnK=False), n, n+len(K),
latex_names=[r'\mathrm{[%s]}' % nam for nam in texnames],
latex_param_names=[r'\mathrm{[%s]_0}' % nam for nam in texnames] + [r'K_{\rm w}', r'K_{\rm a}(\mathrm{NH_4^+})']
)
neqsys
neqsys.get_jac()
%matplotlib inline
def solve_and_plot(nsys):
fig = plt.figure(figsize=(12, 4))
ax_out = plt.subplot(1, 2, 1, xscale='log', yscale='log')
ax_err = plt.subplot(1, 2, 2, xscale='log')
ax_err.set_yscale('symlog', linthreshy=1e-14)
xres, extra = nsys.solve_and_plot_series(
c0, c0+K, NH3_varied, NH3_idx, 'scipy',
plot_kwargs=dict(ax=ax_out), plot_residuals_kwargs=dict(ax=ax_err))
for ax in (ax_out, ax_err):
ax.set_xlabel('[NH3]0 / M')
ax_out.set_ylabel('Concentration / M')
ax_out.legend(loc='best')
ax_err.set_ylabel('Residuals')
avg_nfev = np.average([nfo['nfev'] for nfo in extra['info']])
avg_njev = np.average([nfo['njev'] for nfo in extra['info']])
success = np.average([int(nfo['success']) for nfo in extra['info']])
return {'avg_nfev': avg_nfev, 'avg_njev': avg_njev, 'success': success}
solve_and_plot(neqsys)
Explanation: note how we passed rref=True to linear_exprs; this gives the system of equations in reduced row echelon form. The four preservation equations (one for charge and three for atom types) contain one linearly dependent equation, which is dropped by pyneqsys.symbolic.linear_exprs, and after adding our two equations from the equilibria we are left with 5 equations (the same number as unknowns).
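To see the row reduction in isolation, the conservation matrix built from H, N, O and q can be inspected directly with SymPy. A small sketch; it shows that the charge row is a linear combination of the atom-balance rows, so only three of the four conservation laws are independent:
M = sp.Matrix(preserv)  # rows: H, N, O, charge
rref_M, pivots = M.rref()
print(M.rank(), pivots)  # rank 3 -> one conservation law is redundant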
End of explanation
tneqsys = TransformedSys.from_callback(
partial(get_f, lnK=True), (sp.exp, sp.log), 5, 7,
latex_names=neqsys.latex_names, latex_param_names=neqsys.latex_param_names)
tneqsys
Explanation: Now let's see how pyneqsys can transform our system:
End of explanation
c_res, info = tneqsys.solve([1]*5, np.array(c0+K))
c0, c_res, info['success']
solve_and_plot(tneqsys)
Explanation: Note how the conservation laws became non-linear while the expressions corresponding to the equilibria became linear.
End of explanation |
13,489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploration of pairwise string similarity algorithms
Numerous string similarity measuring algorithms are studied below to understand how they work and which one would be the most suitable. The end goal is to generate a pairwise dissimilarity matrix from (one of) the best algorithms. With such a matrix, we can use techniques like Multi Dimensional Scaling to plot API strings as points. Clustering algorithms could then be used to further understand the cohesion of API names. This will be the focus of the other associated jupyter notebook, $clustering.ipynb$
Step1: Levenshtein
The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other.
It is a metric string distance. The implementation we use is dynamic programming (Wagner–Fischer algorithm).
The space requirement is $\mathcal{O}(M)$. The algorithm runs in $\mathcal{O}(M \times N)$. $M$ could be understood to be the length of the longer string.
Step2: Since we can find out pairwise distances between two strings, we can use such distances as our dissimilarity measure to generate dissimilarity matrices. Such matrices can then help us "plot" words. Such a dissimilarity matrix is symmetric and its diagonal elements are $0$. The plotting code below will be recycled for each algorithm.
Step3: We can now use this dissimilarity matrix to convert the words into a set of points on the $2D$ plane. We use MDS for this. MDs (Multidimensional scaling)
maps points residing in a higher-dimensional space to a lower-dimensional space while preserving the distances between those points as much as possible. Because of this, the pairwise distances between points in the lower-dimensional space are matched closely to their actual distances. Thus, our string points in some arbitrary $N$-dimensions (which is equal to their respective lengths), will be converted into $2-D$ points by preserving their respective distances. Note, we can also plot them in $3-D.$
Step4: Normalized Levenshtein
This distance is computed as levenshtein distance divided by the length of the longer string. The resulting value is always in the interval $[0.0, 1.0]$ but it is not a metric anymore. By metric, we mean it preserves triangle inequality.
Step5: Jaro-Winkler
Jaro-Winkler is a string edit distance that was developed in the area of duplicate detection. The Jaro-Winkler distance metric is designed and best suited for short strings such as person names, and to detect typos. As the vocabulary artifact is presumed to contain only reasonably long strings, this is chosen to be the most relevant variant of edit-distance to our problem.
The algorithm runs in $\mathcal{O}(M \times N)$, with similar space requirements as classic Levenstien's.
Step6: Longest Common Subsequence
The longest common subsequence (LCS) problem finds the longest subsequence common to two sequences. Subsequences are not required to occupy consecutive positions within the original sequences.
The LCS distance between strings $X$ (of length $n$) and $Y$ (of length $m$) is $n + m - 2 (LCS(X, Y)).$ It ranges from $[0, n + m]$
It has space and run time complexity of $\mathcal{O}(M \times N)$.
Step7: Metric Longest Common Subsequence
Based on Longest Common Subsequence, except now distance is
$ 1 - \frac{|LCS(X, Y)|}{max(|X|, |Y|)} $
Step8: N-Gram
The algorithm uses affixing with special character '\n' to increase the weight of first characters. The normalization is achieved by dividing the total similarity score the original length of the longest word. The main idea behind n-gram similarity is generalizing the concept of the longest
common subsequence to encompass n-grams, rather than just unigrams.
Step9: Dice Coefficient and other Q-Gram based algorithms
These algorithms use Q-grams, size $Q$ sets of characters from each string to compute distance. Q-gram distance is a lower bound on Levenshtein distance, but can be computed in $\mathcal{O}(M + N)$ , where as Levenshtein requires $\mathcal{O}(M \times N)$. There are numerous variations like Jaccard index, Cosine similarity, Szymkiewicz-Simpson, etc. Since it is computationally more efficient, we recommend anyone using classic Levenstein to give a look into this one.
Step10: Needleman-Wunsch Global Alignment
Global sequence alignment allows us to compare two strings and find the sequence of "mutations" that lead from one to the other. The goal of sequence alignment is to find homologous/similar sequences in the strings, and then base mutations off of the gaps that emerge between these sequences.
Step11: SIFT 4
This is an algorithm claimed to have been developed to produce a distance measure that matches as close as possible to the human perception of string distance. It takes into account elements like character substitution, character distance, longest common subsequence etc. However, It was developed open-source using experimental testing, and without theoretical background.
Step12: $3-D$ plotting
We can also do a couple $3-D$ visualizations to see how things would look.
Step13: Using some of the algorithms to compute and plot the Google API vocabulary words
Step14: Plotting the entirety of Google's API words
Step15: Using DBSCAN to calculate labels for our words using normalized Levenshtein.
Step16: We can find predicted labels for each algorithm using DBSCAN.
Step17: We can quantify how well our labeling worked by using the following clustering metrics.
Homogeniety | Python Code:
import scipy
import numpy as np
import strsimpy
from Bio import pairwise2
from Bio.Seq import Seq
from Bio.pairwise2 import format_alignment
import matplotlib
import matplotlib.pyplot as plt
similar_strings = ["user_id", "userid", "UserId", "userID"]
Explanation: Exploration of pairwise string similarity algorithms
Numerous string similarity measuring algorithms are studied below to understand how they work and which one would be the most suitable. The end goal is to generate a pairwise dissimilarity matrix from (one of) the best algorithms. With such a matrix, we can use techniques like Multi Dimensional Scaling (MDS) to plot API strings as points. Clustering algorithms could then be used to further understand the cohesion of API names. This will be the focus of the other associated jupyter notebook, $clustering.ipynb$
End of explanation
from strsimpy.levenshtein import Levenshtein
levenshtein = Levenshtein()
print('The levenshtein.distance between {} and {} is {}'. \
format(similar_strings[0], similar_strings[1], levenshtein.distance(similar_strings[0], similar_strings[1])))
Explanation: Levenshtein
The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other.
It is a metric string distance. The implementation we use is dynamic programming (Wagner–Fischer algorithm).
The space requirement is $\mathcal{O}(M)$. The algorithm runs in $\mathcal{O}(M \times N)$. $M$ could be understood to be the length of the longer string.
End of explanation
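As a quick illustration of the dynamic-programming idea (a minimal sketch of our own, not the strsimpy implementation; the helper name wagner_fischer is ours), keeping only the previous row gives the $\mathcal{O}(M)$ space behaviour mentioned above:
def wagner_fischer(a, b):
    # keep only the previous row of the DP table, so space is O(len(b) + 1)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]
print(wagner_fischer("user_id", "userid"))  # expected: 1 (drop the underscore)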
def calculate_dissimilarity_matrix(api_strings, pairwise_dissimilarity_measure):
size = len(api_strings)
inconsistency_matrix = np.zeros((size, size))
for i in range(size):
for j in range(size):
if i < j:
string1 = api_strings[i]
string2 = api_strings[j]
# if either string is empty, the distance is simply the length of the other string
if len(string1) == 0 or len(string2) == 0:
inconsistency_matrix[i][j] = max(len(string1), len(string2))
continue
dissimilarity = pairwise_dissimilarity_measure(string1, string2)
inconsistency_matrix[i][j] = dissimilarity
inconsistency_matrix = inconsistency_matrix + inconsistency_matrix.T - np.diag(np.diag(inconsistency_matrix))
return inconsistency_matrix
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, levenshtein.distance)
print(inconsistency_matrix)
Explanation: Since we can find out pairwise distances between two strings, we can use such distances as our dissimilarity to generate dissimilarity matrices. Such matrices can then help us "plot" words. Such a dissimilarity matrix would be symmetric and the diagonal elements are $0$. Code below to plot will be recycled for each algorithm.
End of explanation
from sklearn.manifold import MDS
def plot_words(inconsistency_matrix, title, annotate=False):
embedding = MDS(n_components=2, dissimilarity = "precomputed")
fitted_strings = embedding.fit_transform(inconsistency_matrix)
fitted_strings
x = fitted_strings[:,0]
y = fitted_strings[:,1]
plt.scatter(x, y)
if annotate:
for i, txt in enumerate(similar_strings):
plt.annotate(txt, (x[i], y[i]))
plt.title(title)
matplotlib.rcParams["figure.dpi"] = 150
plot_words(inconsistency_matrix, title = " levenshtein.distance", annotate=True)
Explanation: We can now use this dissimilarity matrix to convert the words into a set of points on the $2D$ plane. We use MDS for this. MDS (multidimensional scaling)
maps points residing in a higher-dimensional space to a lower-dimensional space while preserving the distances between those points as much as possible. Because of this, the pairwise distances between points in the lower-dimensional space are matched closely to their actual distances. Thus, our string points in some arbitrary $N$-dimensions (which is equal to their respective lengths), will be converted into $2-D$ points by preserving their respective distances. Note, we can also plot them in $3-D.$
End of explanation
from strsimpy.normalized_levenshtein import NormalizedLevenshtein
normalized_levenshtein = NormalizedLevenshtein()
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, normalized_levenshtein.distance)
print(inconsistency_matrix)
plot_words(inconsistency_matrix, title = "normalized_levenshtein.distance", annotate=True)
Explanation: Normalized Levenshtein
This distance is computed as levenshtein distance divided by the length of the longer string. The resulting value is always in the interval $[0.0, 1.0]$ but it is not a metric anymore. By metric, we mean it preserves triangle inequality.
End of explanation
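As a quick sanity check of the definition above (our own snippet; it assumes the library follows exactly the quoted definition), the normalised distance should equal the plain Levenshtein distance divided by the length of the longer string:
a, b = similar_strings[0], similar_strings[1]  # "user_id", "userid"
expected = levenshtein.distance(a, b) / max(len(a), len(b))
print(normalized_levenshtein.distance(a, b), expected)  # the two values should match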
from strsimpy.jaro_winkler import JaroWinkler
jarowinkler = JaroWinkler()
print('The jarowinkler.distance between {} and {} is {}'. \
format(similar_strings[0], similar_strings[1], jarowinkler.distance(similar_strings[0], similar_strings[1])))
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, jarowinkler.distance)
plot_words(inconsistency_matrix, title = "jarowinkler.distance", annotate=True)
Explanation: Jaro-Winkler
Jaro-Winkler is a string edit distance that was developed in the area of duplicate detection. The Jaro-Winkler distance metric is designed and best suited for short strings such as person names, and to detect typos. As the vocabulary artifact is presumed to contain only reasonably long strings, this is chosen to be the most relevant variant of edit-distance to our problem.
The algorithm runs in $\mathcal{O}(M \times N)$, with similar space requirements as classic Levenshtein's.
End of explanation
from strsimpy.longest_common_subsequence import LongestCommonSubsequence
lcs = LongestCommonSubsequence()
print('The lcs.distance between {} and {} is {}'. \
format(similar_strings[0], similar_strings[1], lcs.distance(similar_strings[0], similar_strings[1])))
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, lcs.distance)
plot_words(inconsistency_matrix, title = "lcs.distance", annotate=True)
Explanation: Longest Common Subsequence
The longest common subsequence (LCS) problem finds the longest subsequence common to two sequences. Subsequences are not required to occupy consecutive positions within the original sequences.
The LCS distance between strings $X$ (of length $n$) and $Y$ (of length $m$) is $n + m - 2 (LCS(X, Y)).$ It ranges from $[0, n + m]$
It has space and run time complexity of $\mathcal{O}(M \times N)$.
End of explanation
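A quick worked instance of the formula above (our own check, assuming the library implements exactly that formula): for "user_id" ($n=7$) and "userid" ($m=6$) the longest common subsequence is "userid" of length $6$, so the distance should be $7 + 6 - 2 \cdot 6 = 1$.
print(lcs.distance("user_id", "userid"))  # expected: 7 + 6 - 2*6 == 1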
from strsimpy.metric_lcs import MetricLCS
metric_lcs = MetricLCS()
print('The metric_lcs.distance between {} and {} is {}'. \
format(similar_strings[0], similar_strings[1], metric_lcs.distance(similar_strings[0], similar_strings[1])))
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, metric_lcs.distance)
plot_words(inconsistency_matrix, title = "metric_lcs.distance", annotate=True)
Explanation: Metric Longest Common Subsequence
Based on Longest Common Subsequence, except now distance is
$ 1 - \frac{|LCS(X, Y)|}{max(|X|, |Y|)} $
End of explanation
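Continuing the same example (again our own check): with $|LCS| = 6$ and $\max(|X|, |Y|) = 7$, the metric LCS distance should be $1 - 6/7 \approx 0.143$.
print(metric_lcs.distance("user_id", "userid"))  # expected: roughly 1 - 6/7 ≈ 0.1429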
from strsimpy.ngram import NGram
fourgram = NGram(4)
print('The fourgram.distance between {} and {} is {}'. \
format(similar_strings[0], similar_strings[1], fourgram.distance(similar_strings[0], similar_strings[1])))
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, fourgram.distance)
plot_words(inconsistency_matrix, title = "fourgram.distance", annotate=True)
Explanation: N-Gram
The algorithm uses affixing with the special character '\n' to increase the weight of first characters. The normalization is achieved by dividing the total similarity score by the original length of the longest word. The main idea behind n-gram similarity is generalizing the concept of the longest
common subsequence to encompass n-grams, rather than just unigrams.
End of explanation
from strsimpy.sorensen_dice import SorensenDice
dice = SorensenDice(2)
print('The 2-dice.distance between {} and {} is {}'. \
format(similar_strings[0], similar_strings[1], dice.distance(similar_strings[0], similar_strings[1])))
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, dice.distance)
plot_words(inconsistency_matrix, title = "dice.distance", annotate=True)
Explanation: Dice Coefficient and other Q-Gram based algorithms
These algorithms use Q-grams, i.e. size-$Q$ substrings of each string, to compute distance. Q-gram distance is a lower bound on Levenshtein distance, but can be computed in $\mathcal{O}(M + N)$, whereas Levenshtein requires $\mathcal{O}(M \times N)$. There are numerous variations like the Jaccard index, Cosine similarity, Szymkiewicz-Simpson, etc. Since it is computationally more efficient, we recommend anyone using classic Levenshtein to take a look at this one.
End of explanation
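To make the Q-gram idea concrete, here is an illustrative helper of our own (not from strsimpy, and set-based, so its value may differ slightly from the library's shingle-profile computation):
def bigrams(s):
    # the set of 2-grams (bigrams) of a string
    return {s[i:i + 2] for i in range(len(s) - 1)}
x, y = bigrams("user_id"), bigrams("userid")
print(x & y)                                    # bigrams shared by the two strings
print(1 - 2 * len(x & y) / (len(x) + len(y)))   # Sorensen-Dice distance on the bigram sets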
seq1 = Seq(similar_strings[0])
seq2 = Seq(similar_strings[1])
def global_alignment_score(string1, string2):
alignments = pairwise2.align.globalms(string1, string2, 5, -4, -4, -4) # all alignments will have the same score, return any
return 5 * max(len(string1), len(string2)) - alignments[0].score
# alignments = global_alignment_score(seq1, seq2)
# print(format_alignment(*alignments))
# print('The global_alignment_score distance between {} and {} is {}'. \
# format(similar_strings[0], similar_strings[1], global_alignment_score(seq1, seq2)))
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, global_alignment_score)
plot_words(inconsistency_matrix, title = "global_alignment_score", annotate=True)
Explanation: Needleman-Wunsch Global Alignment
Global sequence alignment allows us to compare two strings and find the sequence of "mutations" that lead from one to the other. The goal of sequence alignment is to find homologous/similar sequences in the strings, and then base mutations off of the gaps that emerge between these sequences.
End of explanation
from strsimpy import SIFT4
s = SIFT4()
print('The SIFT4 distance between {} and {} is {}'. \
format(similar_strings[0], similar_strings[1], s.distance(similar_strings[0], similar_strings[1])))
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, s.distance)
plot_words(inconsistency_matrix, title = "SIFT4", annotate=True)
Explanation: SIFT 4
This is an algorithm claimed to have been developed to produce a distance measure that matches as closely as possible to human perception of string distance. It takes into account elements like character substitution, character distance, longest common subsequence etc. However, it was developed open-source using experimental testing, and without theoretical background.
End of explanation
def plot_words_3d(inconsistency_matrix, title, annotate=False):
embedding = MDS(n_components=3, dissimilarity = "precomputed")
fitted_strings = embedding.fit_transform(inconsistency_matrix)
fitted_strings
x = fitted_strings[:,0]
y = fitted_strings[:,1]
z = fitted_strings[:,2]
return x, y, z
matplotlib.rcParams["figure.dpi"] = 150
from mpl_toolkits import mplot3d
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax = plt.axes(projection='3d')
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, s.distance)
x, y, z = plot_words_3d(inconsistency_matrix, "Sift4")
ax.scatter(x, y, z)
ax.set_title( "SIFT4 - 3D plot")
matplotlib.rcParams["figure.dpi"] = 150
fig = plt.figure()
ax = plt.axes(projection='3d')
inconsistency_matrix = calculate_dissimilarity_matrix(similar_strings, dice.distance)
x, y, z = plot_words_3d(inconsistency_matrix, "Dice")
ax.scatter(x, y, z)
ax.set_title( "Dice - 3D plot")
Explanation: $3-D$ plotting
We can also do a couple $3-D$ visualizations to see how things would look.
End of explanation
import pandas as pd
df = pd.read_csv (r'/home/gelaw/work-stuff/gocode/src/registry-experimental/consistency/rpc/google/cloud/apigeeregistry/v1/similarity/algorithms /vocab.csv')
words = []
for word in df.values:
words.append(word[0])
res = []
for word in words:
if isinstance(word, float):
print(word)
else:
res.append(word)
inconsistency_matrix = calculate_dissimilarity_matrix(res[1:300], jarowinkler.distance)
def plot_words2(inconsistency_matrix, title, annotate=False):
embedding = MDS(n_components=2, dissimilarity = "precomputed")
fitted_strings = embedding.fit_transform(inconsistency_matrix)
fitted_strings
x = fitted_strings[:,0]
y = fitted_strings[:,1]
plt.scatter(x, y, s= 5)
if annotate:
for i, txt in enumerate(similar_strings):
plt.annotate(txt, (x[i], y[i]))
plt.title(title)
plot_words2(inconsistency_matrix, title = "jaro-winkler", annotate=False)
inconsistency_matrix = calculate_dissimilarity_matrix(res[1:300], fourgram.distance)
plot_words2(inconsistency_matrix, title = "four-gram", annotate=False)
Explanation: Using some of the algorithms to compute and plot the Google API vocabulary words
End of explanation
inconsistency_matrix = calculate_dissimilarity_matrix(res, fourgram.distance)
plot_words2(inconsistency_matrix, title = "four-gram", annotate=False)
matplotlib.rcParams["figure.dpi"] = 250
def plot_words3(inconsistency_matrix, title, annotate=False):
embedding = MDS(n_components=2, dissimilarity = "precomputed")
fitted_strings = embedding.fit_transform(inconsistency_matrix)
fitted_strings
x = fitted_strings[:,0]
y = fitted_strings[:,1]
plt.scatter(x, y, s= 1)
if annotate:
for i, txt in enumerate(similar_strings):
plt.annotate(txt, (x[i], y[i]))
plt.title(title)
inconsistency_matrix = calculate_dissimilarity_matrix(res, jarowinkler.distance)
plot_words3(inconsistency_matrix, title = "jaro-winkler", annotate=False)
from scipy.spatial.distance import pdist
from scipy.spatial.distance import squareform
def extract_lv (array_1, array_2):
return levenshtein.distance(array_1[0], array_2[0])
def extract_nlv (array_1, array_2):
return normalized_levenshtein.distance(array_1[0], array_2[0])
def extract_jarowinkler (array_1, array_2):
return jarowinkler.distance(array_1[0], array_2[0])
def extract_lcs (array_1, array_2):
return lcs.distance(array_1[0], array_2[0])
def extract_dice (array_1, array_2):
return dice.distance(array_1[0], array_2[0])
def extract_global_alignment (array_1, array_2):
return global_alignment_score(array_1[0], array_2[0])
def extract_sift4 (array_1, array_2):
return s.distance(array_1[0], array_2[0])
algorithm_wrapper = {
levenshtein: extract_lv,
normalized_levenshtein: extract_nlv,
jarowinkler: extract_jarowinkler,
lcs: extract_lcs,
dice: extract_dice,
global_alignment_score: extract_global_alignment,
s: extract_sift4
}
### a more efficient implementation to calculate the dissimilarity matrix using SciPy's pairwise distance function (pdist)
def efficient_dissimilarity_matrix(strings, dissimilarity_measuring_algorithm, return_square_matrix = True):
condensed_matrix = pdist(strings, algorithm_wrapper[dissimilarity_measuring_algorithm])
if return_square_matrix:
return squareform(condensed_matrix)
return condensed_matrix
reshaped_words = np.reshape(similar_strings, (len(similar_strings), 1))
print(reshaped_words)
print(reshaped_words.shape)
matrixx = efficient_dissimilarity_matrix(reshaped_words, normalized_levenshtein)
print(matrixx)
tagged_df = pd.read_csv (r'/home/gelaw/work-stuff/gocode/src/registry-experimental/consistency/rpc/google/cloud/apigeeregistry/v1/similarity/algorithms /vocab1000.csv')
tagged_df = tagged_df.drop(tagged_df.index[1000:])
word_labels = tagged_df.iloc[:, 0]
word_labels = word_labels.to_numpy()
tagged_words = tagged_df.iloc[:, 1]
tagged_words = tagged_words.to_numpy()
tagged_words_dissimilairty = efficient_dissimilarity_matrix(tagged_words.reshape(len(tagged_words), 1), normalized_levenshtein, return_square_matrix = True)
from strsimpy.normalized_levenshtein import NormalizedLevenshtein
from sklearn.cluster import dbscan
data = tagged_words
print(data[0])
normalized_levenshtein = NormalizedLevenshtein()
print(normalized_levenshtein.distance('My string', 'My string'))
import numpy as np
from sklearn.cluster import dbscan
data = tagged_words
def extract_indices_lv(x, y):
i, j = int(x[0]), int(y[0]) # extract indices
return levenshtein.distance(data[i], data[j])
def extract_indices_nlv(x, y):
i, j = int(x[0]), int(y[0]) # extract indices
return normalized_levenshtein.distance(data[i], data[j])
def extract_indices_jarowinkler(x, y):
i, j = int(x[0]), int(y[0]) # extract indices
return jarowinkler.distance(data[i], data[j])
def extract_indices_lcs(x, y):
i, j = int(x[0]), int(y[0]) # extract indices
return lcs.distance(data[i], data[j])
def extract_indices_dice(x, y):
i, j = int(x[0]), int(y[0]) # extract indices
return dice.distance(data[i], data[j])
def extract_indices_sift4(x, y):
i, j = int(x[0]), int(y[0]) # extract indices
return s.distance(data[i], data[j])
X = np.arange(len(data)).reshape(-1, 1)
# We need to specify algorithm='brute' as the default assumes
# a continuous feature space.
Explanation: Plotting the entirety of Google's API words
End of explanation
db = dbscan(X, metric=extract_indices_nlv, eps=.3, min_samples=2, algorithm='brute')
print("the number of unique lables generated by dbscan is: ", len(set(db[1])))
db = dbscan(X, metric=extract_indices_dice, eps=.3, min_samples=2, algorithm='brute')
print("the number of unique lables generated by dbscan is: ", len(set(db[1])))
#print((db[1]).shape)
word_labels = np.array([int(i) for i in word_labels])
#print(word_labels.shape)
Explanation: Using DBSCAN to calculate labels for our words using normalized Levenshtein.
End of explanation
def compute_predicted_lables(data, algorithm, dbscan_eps, dbscan_min_samples):
db = dbscan(data, metric=algorithm, eps=dbscan_eps, min_samples=dbscan_min_samples, algorithm='brute')
return db[1]
lables = compute_predicted_lables(data = X, algorithm = extract_indices_nlv, dbscan_eps = .3, dbscan_min_samples = 2)
Explanation: We can find predicted labels for each algorithm using DBSCAN.
End of explanation
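One detail worth keeping in mind (our own note, reusing the notebook's lables variable): scikit-learn's dbscan marks noise points with the label -1, so it is useful to count them separately when interpreting the number of clusters:
n_noise = int(np.sum(lables == -1))
print("points labelled as noise by dbscan:", n_noise, "out of", len(lables))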
from sklearn.metrics.cluster import homogeneity_score
from sklearn.metrics.cluster import completeness_score
from sklearn.metrics.cluster import v_measure_score
v = v_measure_score(word_labels, lables)
c = completeness_score(word_labels, lables)
h = homogeneity_score(word_labels, lables)
print(" the v measure score for normalized levenstein is: ", str(round(v,2)*100) + "%")
print(" the completeness score for normalized levenstein is: ", str(round(c,2)*100) + "%")
print(" the homogeniety score for normalized levenstein is: ", str(round(h,2)*100) + "%")
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
params = [{'dbscan_eps': [.1, .2, .3, .4, .5, .6, .7, .8, .9],
'dbscan_min_samples': [2, 3, 4, 5, 6, 7, 8, 9],
}]
dbscan_eps_values = [.1, .2, .3, .4, .5, .6, .7, .8, .9]
dbscan_min_samples = [2, 3, 4, 5, 6, 7, 8, 9]
best_eps = -1
best_min_value = -1
best_score = 0
for i in dbscan_eps_values:
for j in dbscan_min_samples:
lables = compute_predicted_lables(data = X, algorithm = extract_indices_nlv, dbscan_eps = i, dbscan_min_samples = j)
current_score = v_measure_score(word_labels, lables)
if (current_score > best_score):
best_score = current_score
best_eps = i
best_min_value = j
print(i, j, current_score)
#print(best_eps, best_min_value, best_score)
from sklearn.metrics.cluster import homogeneity_score
from sklearn.metrics.cluster import completeness_score
lables = compute_predicted_lables(data = X, algorithm = extract_indices_nlv, dbscan_eps = .2, dbscan_min_samples = 2)
current_score = homogeneity_score(word_labels, lables)
print(current_score)
print(" abe ")
current_score = completeness_score(word_labels, lables)
print(current_score)
print(best_eps, best_min_value, best_score)
Explanation: We can quantify how well our labeling worked by using the following clustering metrics.
Homogeneity: A clustering result satisfies homogeneity if all of its clusters contain only points which are members of a single class.
Completeness: A clustering result satisfies completeness if all points that are members of a given class are elements of the same cluster.
V-measure: the harmonic mean between homogeneity and completeness. We use this for our performance analysis.
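Concretely, with equal weighting the V-measure is $V = 2 \cdot \frac{h \cdot c}{h + c}$, where $h$ is the homogeneity score and $c$ is the completeness score.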
End of explanation |
13,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Feature Synthesis
Deep Feature Synthesis (DFS) is an automated method for performing feature engineering on relational and temporal data.
Input Data
Deep Feature Synthesis requires structured datasets in order to perform feature engineering. To demonstrate the capabilities of DFS, we will use a mock customer transactions dataset.
Step1: Once data is prepared as an EntitySet, we are ready to automatically generate features for a target dataframe - e.g. customers.
Running DFS
Typically, without automated feature engineering, a data scientist would write code to aggregate data for a customer, and apply different statistical functions resulting in features quantifying the customer's behavior. In this example, an expert might be interested in features such as
Step2: In the example above, "count" is an aggregation primitive because it computes a single value based on many sessions related to one customer. "month" is called a transform primitive because it takes one value for a customer and transforms it to another.
Creating "Deep Features"
The name Deep Feature Synthesis comes from the algorithm's ability to stack primitives to generate more complex features. Each time we stack a primitive we increase the "depth" of a feature. The max_depth parameter controls the maximum depth of the features returned by DFS. Let us try running DFS with max_depth=2
Step3: With a depth of 2, a number of features are generated using the supplied primitives. The algorithm to synthesize these definitions is described in this paper. In the returned feature matrix, let us understand one of the depth 2 features
Step4: For each customer this feature
calculates the sum of all transaction amounts per session to get total amount per session,
then applies the mean to the total amounts across multiple sessions to identify the average amount spent per session
We call this feature a "deep feature" with a depth of 2.
Let's look at another depth 2 feature that calculates for every customer the most common hour of the day when they start a session
Step5: For each customer this feature calculates
The hour of the day each of his or her sessions started, then
uses the statistical function mode to identify the most common hour he or she started a session
Stacking results in features that are more expressive than individual primitives themselves. This enables the automatic creation of complex patterns for machine learning.
Changing Target DataFrame
DFS is powerful because we can create a feature matrix for any dataframe in our dataset. If we switch our target dataframe to "sessions", we can synthesize features for each session instead of each customer. Now, we can use these features to predict the outcome of a session.
Step6: As we can see, DFS will also build deep features based on a parent dataframe, in this case the customer of a particular session. For example, the feature below calculates the mean transaction amount of the customer of the session. | Python Code:
import featuretools as ft
es = ft.demo.load_mock_customer(return_entityset=True)
es
Explanation: Deep Feature Synthesis
Deep Feature Synthesis (DFS) is an automated method for performing feature engineering on relational and temporal data.
Input Data
Deep Feature Synthesis requires structured datasets in order to perform feature engineering. To demonstrate the capabilities of DFS, we will use a mock customer transactions dataset.
End of explanation
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_dataframe_name="customers",
agg_primitives=["count"],
trans_primitives=["month"],
max_depth=1)
feature_matrix
Explanation: Once data is prepared as an EntitySet, we are ready to automatically generate features for a target dataframe - e.g. customers.
Running DFS
Typically, without automated feature engineering, a data scientist would write code to aggregate data for a customer, and apply different statistical functions resulting in features quantifying the customer's behavior. In this example, an expert might be interested in features such as: total number of sessions or month the customer signed up.
These features can be generated by DFS when we specify the target_dataframe as customers and "count" and "month" as primitives.
End of explanation
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_dataframe_name="customers",
agg_primitives=["mean", "sum", "mode"],
trans_primitives=["month", "hour"],
max_depth=2)
feature_matrix
Explanation: In the example above, "count" is an aggregation primitive because it computes a single value based on many sessions related to one customer. "month" is called a transform primitive because it takes one value for a customer and transforms it to another.
Creating "Deep Features"
The name Deep Feature Synthesis comes from the algorithm's ability to stack primitives to generate more complex features. Each time we stack a primitive we increase the "depth" of a feature. The max_depth parameter controls the maximum depth of the features returned by DFS. Let us try running DFS with max_depth=2
End of explanation
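As a small aside (our own illustrative snippet, not part of the original guide; it assumes the get_name() and get_depth() helpers on featuretools feature definitions), the returned feature_defs list can be inspected to see how primitives were stacked:
for fd in feature_defs[:5]:
    # peek at a few synthesized feature definitions and their depth
    print(fd.get_name(), "- depth", fd.get_depth())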
feature_matrix[['MEAN(sessions.SUM(transactions.amount))']]
Explanation: With a depth of 2, a number of features are generated using the supplied primitives. The algorithm to synthesize these definitions is described in this paper. In the returned feature matrix, let us understand one of the depth 2 features
End of explanation
feature_matrix[['MODE(sessions.HOUR(session_start))']]
Explanation: For each customer this feature
calculates the sum of all transaction amounts per session to get total amount per session,
then applies the mean to the total amounts across multiple sessions to identify the average amount spent per session
We call this feature a "deep feature" with a depth of 2.
Let's look at another depth 2 feature that calculates for every customer the most common hour of the day when they start a session
End of explanation
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_dataframe_name="sessions",
agg_primitives=["mean", "sum", "mode"],
trans_primitives=["month", "hour"],
max_depth=2)
feature_matrix.head(5)
Explanation: For each customer this feature calculates
The hour of the day each of his or her sessions started, then
uses the statistical function mode to identify the most common hour he or she started a session
Stacking results in features that are more expressive than individual primitives themselves. This enables the automatic creation of complex patterns for machine learning.
Changing Target DataFrame
DFS is powerful because we can create a feature matrix for any dataframe in our dataset. If we switch our target dataframe to "sessions", we can synthesize features for each session instead of each customer. Now, we can use these features to predict the outcome of a session.
End of explanation
feature_matrix[['customers.MEAN(transactions.amount)']].head(5)
Explanation: As we can see, DFS will also build deep features based on a parent dataframe, in this case the customer of a particular session. For example, the feature below calculates the mean transaction amount of the customer of the session.
End of explanation |
13,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quality Check API Example
Step1: Create a session. Note the api endpoint, lab-services.ovation.io for Ovation Service Lab.
Step2: Create a Quality Check (QC) activity
A QC activity determines the status of results for each Sample in a Workflow. Normally, QC activities are handled in the web application, but you can submit a new activity with the necessary information to complete the QC programaticallly.
First, we'll need a workflow and the label of the QC activity WorkflowActivity
Step3: Next, we'll get the WorkflowSampleResults for the batch. Each WorkflowSampleResult contains the parsed data for a single Sample within the batch. Each WorkflowSampleResult has a result_type that distinguishes each kind of data.
Step4: Within each WorkflowSampleResult you should see a result object containing records for each assay. In most cases, the results parser created a record for each line in an uploaded tabular (csv or tab-delimited) file. When that record has an entry identifiying the sample and an entry identifying the assay, the parser places that record into the WorkflowSampleResult for the corresponding Workflow Sample, result type, and assay. If more than one record matches this Sample > Result type > Assay, it will be appended to the records for that sample, result type, and assay.
A QC activity updates the status of assays and entire Workflow Sample Results. Each assay may recieve a status ("accepted", "rejected", or "repeat") indicating the QC outcome of that assay for a particular sample. In addition, the WorkflowSampleResult has a global status indicating the overall QC outcome for that sample and result type. Individual assay statuses may be used on repeat to determine which assays need to be repeated. The global status determines how the sample is routed following QC. In fact, there can be multiple routing options for each status (e.g. an "Accept and process for workflow A" and "Accept and process for workflow B" options). Ovation internally uses a routing value to indicate (uniquely) which routing option to chose from the configuration. In many cases routing is the same as status (but not always).
WorkflowSampleResult and assay statuses are set (overriding any existing status) by creating a QC activity, passing the updated status for each workflow sample result and contained assay(s).
In this example, we'll randomly choose statuses for each of the workflow samples above
Step5: The activity data we POST will look like this | Python Code:
import urllib
import ovation.lab.workflows as workflows
import ovation.session as session
Explanation: Quality Check API Example
End of explanation
s = session.connect(input('Email: '), api='https://lab-services.ovation.io')
Explanation: Create a session. Note the api endpoint, lab-services.ovation.io for Ovation Service Lab.
End of explanation
workflow_id = input('Workflow ID: ')
qc_activity_label = input('QC activity label: ')
Explanation: Create a Quality Check (QC) activity
A QC activity determines the status of results for each Sample in a Workflow. Normally, QC activities are handled in the web application, but you can submit a new activity with the necessary information to complete the QC programaticallly.
First, we'll need a workflow and the label of the QC activity WorkflowActivity:
End of explanation
result_type = input('Result type: ')
workflow_sample_results = s.get(s.path('workflow_sample_results'), params={'workflow_id': workflow_id,
'result_type': result_type})
workflow_sample_results
Explanation: Next, we'll get the WorkflowSampleResults for the batch. Each WorkflowSampleResult contains the parsed data for a single Sample within the batch. Each WorkflowSampleResult has a result_type that distinguishes each kind of data.
End of explanation
import random
WSR_STATUS = ["accepted", "rejected", "repeat"]
ASSAY_STATUS = ["accepted", "rejected"]
qc_results = []
for wsr in workflow_sample_results:
assay_results = {}
for assay_name, assay in wsr.result.items():
assay_results[assay_name] = {"status": random.choice(ASSAY_STATUS)}
wsr_status = random.choice(WSR_STATUS)
result = {'id': wsr.id,
'result_type': wsr.result_type,
'status': wsr_status,
'routing': wsr_status,
'result': assay_results}
qc_results.append(result)
Explanation: Within each WorkflowSampleResult you should see a result object containing records for each assay. In most cases, the results parser created a record for each line in an uploaded tabular (csv or tab-delimited) file. When that record has an entry identifiying the sample and an entry identifying the assay, the parser places that record into the WorkflowSampleResult for the corresponding Workflow Sample, result type, and assay. If more than one record matches this Sample > Result type > Assay, it will be appended to the records for that sample, result type, and assay.
A QC activity updates the status of assays and entire Workflow Sample Results. Each assay may recieve a status ("accepted", "rejected", or "repeat") indicating the QC outcome of that assay for a particular sample. In addition, the WorkflowSampleResult has a global status indicating the overall QC outcome for that sample and result type. Individual assay statuses may be used on repeat to determine which assays need to be repeated. The global status determines how the sample is routed following QC. In fact, there can be multiple routing options for each status (e.g. an "Accept and process for workflow A" and "Accept and process for workflow B" options). Ovation internally uses a routing value to indicate (uniquely) which routing option to chose from the configuration. In many cases routing is the same as status (but not always).
WorkflowSampleResult and assay statuses are set (overriding any existing status) by creating a QC activity, passing the updated status for each workflow sample result and contained assay(s).
In this example, we'll randomly choose statuses for each of the workflow samples above:
End of explanation
qc = workflows.create_activity(s, workflow_id, qc_activity_label,
activity={'workflow_sample_results': qc_results,
'custom_attributes': {} # Always an empty dictionary for QC activities
})
Explanation: The activity data we POST will look like this:
{"workflow_sample_results": [{"id": WORKFLOW_SAMPLE_RESULT_ID,
"result_type": RESULT_TYPE,
"status":"accepted"|"rejected"|"repeat",
"routing":"accepted",
"result":{ASSAY:{"status":"accepted"|"rejected"}}},
...]}}
End of explanation |
13,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 4
Imports
Step1: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$
Step3: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 4
Imports
End of explanation
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
def complete_deg(n):
Return the integer valued degree matrix D for the complete graph K_n.
# YOUR CODE HERE
#raise NotImplementedError()
#creates idenity matrix and multiplies by (n-1)
a = np.identity(n,dtype=int)
return a*(n-1)
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
def complete_adj(n):
Return the integer valued adjacency matrix A for the complete graph K_n.
# YOUR CODE HERE
#raise NotImplementedError()
#creates n dimetion matrix of ones
a = np.ones((n,n), dtype=int)
#subtracts identity matrix from ones matrix
b = a - np.identity(n, dtype=int)
return b
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
# YOUR CODE HERE
#raise NotImplementedError()
#creates 10 different Kn matricies and prints their eigenvalues
for i in range(1,10):
x = (complete_deg(i) - complete_adj(i))
print np.linalg.eigvals(x)
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation |
13,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to Python!
I know what you are all thinking...finally!
Okay let's check out the basics of Python.
I am typing this inside of Jupyter notebook which yields a markdown/programming environment similar to R markdown.
First let us discuss the basics of Python. Here are our standard types
Step1: Lists and dictionaries
Okay let's check out some syntetic data structures.
Step2: Now let us look at dictionaries.
Step3: There are also tuples, which are non-transformable.
Step4: List and dictionary comprehensions
Okay, now some of my favorite features, list and dictionary comprehensions, which allow us to use syntax similar to the mathematician's set notation
Step5: Note that exponentiation in python is done with the symbol **
Also note that the range function works a bit like slicing.
Step6: We can also select subsets
Step7: An example with functions
Just for fun let's see how to build a simple encrypter. First let us import a variable of printable characters from the module string and denote it by chars.
Step8: There is a couple new things going on in this previous line, so let's unpack it. First we have dictionary comprehension, which is defined like a list comprehension. We have used the integer division operateor // and the integer modulus operator %.
Step9: Now we are going to use one of the three core functions from functional programming (map, reduce, filter), namely reduce. This goes through a list item by item and applies a two variable function using an accumulated value for the first argument and the list element for the second. We only need to use this two variable function once, so we will use an anonymous/lambda function.
Finally, it is important to note the absence of brackets indicating the start and end of the function. Python accomplishes this using spacing. This is very unusual, but in Python spacing has meaning and if you use inconsistent spacing your program will not run.
Step10: Numpy and Pandas
Unlike R, Python was not designed for statistical analysis. Python was designed as a general purpose high level programming language. However, one of Python's strongest features is an truly vast collection of easy to use libraries (called modules) that drastically simplify our lives.
Two key core pieces of R functionality are lacking. We do not have an analogue of vectors (efficient lists containing only one type of element), so we are also lacking matrices and tensors, which are just fancier vectors. We are also lacking the data frame abstraction which plays a central role in R.
Vector functionality comes from numpy which is usually imported as np. This provides fast vectors and vectorized operations and should be used when possible instead of lists of numerical data. Dataframes come from pandas which is usually imported as pd. Pandas builds on numpy and is part of the scipy ecosystem, which includes many numerical libraries including more advanced statistics and linear algebra functions. The scipy ecosystem also includes matplotlib which is a pretty complex/flexible plotting library. I should also mention scikit-learn which is a standard machine learning library (although surprisingly limited) is built on scipy.
Step11: A useful numpy feature (although it takes some getting used to) is broadcasting, which is similar to functionality in R, which automatically converts an array of one shape into another shape when performing various operations according to these rules. Broadcasting can easiliy lead to bugs and confusion, so try to be careful.
Step12: Pay attention to the syntax for referencing. Think of the loc and iloc objects as dictionaries which will pull up the relevant pieces of the data frame and allow slicing notation (which is now inclusive on both ends). The difference is that loc searches by name and iloc only searches by numerical index.
Step13: Select only the first few symbols. | Python Code:
3
type(3)
3.0
type(3.0)
type('c')
type('ca')
type("ca")
True
type(True)
type(T) #Not defined unlike R
type(true)
type(x=3) #An assignment does not return a value. This is different from C/C++/R.
x=2 #assignment
x
x==3 #Boolean
Explanation: Welcome to Python!
I know what you are all thinking...finally!
Okay let's check out the basics of Python.
I am typing this inside of Jupyter notebook which yields a markdown/programming environment similar to R markdown.
First let us discuss the basics of Python. Here are our standard types:
End of explanation
y = [4.5,x, 'c'] #lists can contain different types
type(y)
y[0] #zero indexing
y[1]
y[-1] #last entry
y[-2]
len(y)
y = y + ['a','b','d']
y
y[1:3] #Slicing!
y[1:4]
y[1:6:2] # jump by twos
y[:] #copy entire list
z = y
z[1]=3
y
z = y[:]
z[1]=2
z == y
z[1]
y[1]
z = y[::-1] #Reverse order
z
Explanation: Lists and dictionaries
Okay let's check out some syntetic data structures.
End of explanation
a = {'x' : 1, 'y' : z, 'z' : 'entry'}
a
a['x']
a['y'][3]
a.values()
a.keys()
'abc'+'efg'
'abc'[2]
'abcdef'[-2]='x' # strings are immutable (as usual)
'abc'.upper()
Explanation: Now let us look at dictionaries.
End of explanation
x = (1,2,3)
x
type(x)
x[2]
x[-2]=3 # Fails
Explanation: There are also tuples, which are non-transformable.
End of explanation
w = [ a**2 for a in range(10)]
w
Explanation: List and dictionary comprehensions
Okay, now some of my favorite features, list and dictionary comprehensions, which allow us to use syntax similar to the mathematician's set notation
End of explanation
[a for a in range(1,20,2)]
Explanation: Note that exponentiation in python is done with the symbol **
Also note that the range function works a bit like slicing.
End of explanation
[a for a in range(1,20) if a % 2 != 0]
Explanation: We can also select subsets:
End of explanation
from string import printable as chars
chars
lc = len(chars); lc
codebook = {chars[i] : chars[(i+lc//2)%lc] for i in range(lc)}
Explanation: An example with functions
Just for fun let's see how to build a simple encrypter. First let us import a variable of printable characters from the module string and denote it by chars.
End of explanation
codebook['a']
codebook['Y']
Explanation: There is a couple new things going on in this previous line, so let's unpack it. First we have dictionary comprehension, which is defined like a list comprehension. We have used the integer division operateor // and the integer modulus operator %.
End of explanation
from functools import reduce
def encode_decode(s):
return reduce(lambda x,y: x+codebook[y],s,"")
encrypted = encode_decode('This is a secret message'); encrypted
encode_decode(encrypted)
Explanation: Now we are going to use one of the three core functions from functional programming (map, reduce, filter), namely reduce. This goes through a list item by item and applies a two variable function using an accumulated value for the first argument and the list element for the second. We only need to use this two variable function once, so we will use an anonymous/lambda function.
Finally, it is important to note the absence of brackets indicating the start and end of the function. Python accomplishes this using spacing. This is very unusual, but in Python spacing has meaning and if you use inconsistent spacing your program will not run.
End of explanation
import numpy as np
a=np.arange(10)
np.sin(a) # vectorized operation
Explanation: Numpy and Pandas
Unlike R, Python was not designed for statistical analysis. Python was designed as a general purpose high level programming language. However, one of Python's strongest features is an truly vast collection of easy to use libraries (called modules) that drastically simplify our lives.
Two key core pieces of R functionality are lacking. We do not have an analogue of vectors (efficient lists containing only one type of element), so we are also lacking matrices and tensors, which are just fancier vectors. We are also lacking the data frame abstraction which plays a central role in R.
Vector functionality comes from numpy which is usually imported as np. This provides fast vectors and vectorized operations and should be used when possible instead of lists of numerical data. Dataframes come from pandas which is usually imported as pd. Pandas builds on numpy and is part of the scipy ecosystem, which includes many numerical libraries including more advanced statistics and linear algebra functions. The scipy ecosystem also includes matplotlib which is a pretty complex/flexible plotting library. I should also mention scikit-learn which is a standard machine learning library (although surprisingly limited) is built on scipy.
End of explanation
a*2
list(range(10))*2
a*a
a.a
a.shape
b=a.reshape(10,1)
b
b.T
b.T.shape
c=np.dot(a,b); c
c.shape
d=np.zeros(shape=(2,3)); d
e = np.ones_like(d); e
f = np.ndarray(shape = (2,3,4), buffer = np.array(list(range(24))),dtype = np.int)
f
f[1,2,3]
f[1,1:3,3]
f[:,1:3,3]
for x in f:
print(x)
for outer in f:
for inner in outer:
for really_inner in inner:
print(really_inner)
import pandas as pd
df = pd.read_csv("crypto-markets.csv")
df.head()
df.symbol.unique()
len(df.symbol.unique())
df['symbol'].unique()
small_df = df.head(25)
small_df
small_df[['date', 'close']]
small_df[4:6]
small_df[4] # fails
small_df.loc[4]
small_df.loc[4,"open"]
small_df.iloc[4,4]
Explanation: A useful numpy feature (although it takes some getting used to) is broadcasting, which is similar to functionality in R, which automatically converts an array of one shape into another shape when performing various operations according to these rules. Broadcasting can easiliy lead to bugs and confusion, so try to be careful.
End of explanation
type(small_df.loc[4:4])
type(small_df.loc[4])
df['date'] = pd.to_datetime(df['date'])
df['date'].head()
Explanation: Pay attention to the syntax for referencing. Think of the loc and iloc objects as dictionaries which will pull up the relevant pieces of the data frame and allow slicing notation (which is now inclusive on both ends). The difference is that loc searches by name and iloc only searches by numerical index.
End of explanation
mask = df['symbol'].isin(df['symbol'].unique()[1:5])
trim_df = df[mask]
from ggplot import *
gg = ggplot(aes(x='date',y='close',color='symbol'),data = trim_df) + geom_line() + ggtitle("Cryptocurrency prices") + scale_y_log() + \
scale_x_date() + ylab("Closing price (log-scale)") + xlab("Date")
gg.show()
Explanation: Select only the first few symbols.
End of explanation |
13,494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GEE nested covariance structure simulation study
This notebook is a simulation study that illustrates and evaluates the performance of the GEE nested covariance structure.
A nested covariance structure is based on a nested sequence of groups, or "levels". The top level in the hierarchy is defined by the groups argument to GEE. Subsequent levels are defined by the dep_data argument to GEE.
Step1: Set the number of covariates.
Step2: These parameters define the population variance for each level of grouping.
Step3: Set the number of groups
Step4: Set the number of observations at each level of grouping. Here, everything is balanced, i.e. within a level every group has the same size.
Step5: Calculate the total sample size.
Step6: Construct the design matrix.
Step7: Construct labels showing which group each observation belongs to at each level.
Step8: Simulate the random effects.
Step9: Simulate the response variable.
Step10: Put everything into a dataframe.
Step11: Fit the model.
Step12: The estimated covariance parameters should be similar to groups_var, level1_var, etc. as defined above. | Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
Explanation: GEE nested covariance structure simulation study
This notebook is a simulation study that illustrates and evaluates the performance of the GEE nested covariance structure.
A nested covariance structure is based on a nested sequence of groups, or "levels". The top level in the hierarchy is defined by the groups argument to GEE. Subsequent levels are defined by the dep_data argument to GEE.
End of explanation
p = 5
Explanation: Set the number of covariates.
End of explanation
groups_var = 1
level1_var = 2
level2_var = 3
resid_var = 4
Explanation: These parameters define the population variance for each level of grouping.
End of explanation
n_groups = 100
Explanation: Set the number of groups
End of explanation
group_size = 20
level1_size = 10
level2_size = 5
Explanation: Set the number of observations at each level of grouping. Here, everything is balanced, i.e. within a level every group has the same size.
End of explanation
n = n_groups * group_size * level1_size * level2_size
Explanation: Calculate the total sample size.
End of explanation
xmat = np.random.normal(size=(n, p))
Explanation: Construct the design matrix.
End of explanation
groups_ix = np.kron(np.arange(n // group_size), np.ones(group_size)).astype(int)
level1_ix = np.kron(np.arange(n // level1_size), np.ones(level1_size)).astype(int)
level2_ix = np.kron(np.arange(n // level2_size), np.ones(level2_size)).astype(int)
Explanation: Construct labels showing which group each observation belongs to at each level.
End of explanation
groups_re = np.sqrt(groups_var) * np.random.normal(size=n // group_size)
level1_re = np.sqrt(level1_var) * np.random.normal(size=n // level1_size)
level2_re = np.sqrt(level2_var) * np.random.normal(size=n // level2_size)
Explanation: Simulate the random effects.
End of explanation
y = groups_re[groups_ix] + level1_re[level1_ix] + level2_re[level2_ix]
y += np.sqrt(resid_var) * np.random.normal(size=n)
Explanation: Simulate the response variable.
End of explanation
df = pd.DataFrame(xmat, columns=["x%d" % j for j in range(p)])
df["y"] = y + xmat[:, 0] - xmat[:, 3]
df["groups_ix"] = groups_ix
df["level1_ix"] = level1_ix
df["level2_ix"] = level2_ix
Explanation: Put everything into a dataframe.
End of explanation
cs = sm.cov_struct.Nested()
dep_fml = "0 + level1_ix + level2_ix"
m = sm.GEE.from_formula(
"y ~ x0 + x1 + x2 + x3 + x4",
cov_struct=cs,
dep_data=dep_fml,
groups="groups_ix",
data=df,
)
r = m.fit()
Explanation: Fit the model.
End of explanation
r.cov_struct.summary()
Explanation: The estimated covariance parameters should be similar to groups_var, level1_var, etc. as defined above.
End of explanation |
13,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sources
Bowers, Johnson, Pease, "Prospective hot-spotting
Step1: Visualise the risk intensity directly
Our random data includes events from the past week, and from one (whole) week ago. We visualise this by plotting the older events in a fainter colour.
Step2: Visualise the top 1%, 5% and 10% of risk
It might seem strange that some events seem to confer so little risk that the cell they appear in does not reach the top 10%. This is because the weight is additive, and so we strongly weight cells which are near a number of different events. Remember also that we weight recent events more.
Furthermore, because we look at the percentiles, but above we view the data using a linear colouring, we are not really comparing like with like. You can see this below, where we plot the (inverse of the) cumulative probability density and see that the distribution has a heavy tail.
Step3: Continuous kernel estimation
Step4: We note that, unlike the "retrospective" hotspotting technique, the kernel does not decay smoothly to zero. This accounts for the "noisier" looking image. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import open_cp
import open_cp.prohotspot as phs
# Generate some random data
import datetime
times = [datetime.datetime(2017,3,10) + datetime.timedelta(days=np.random.randint(0,10)) for _ in range(20)]
times.sort()
xc = np.random.random(size=20) * 500
yc = np.random.random(size=20) * 500
points = open_cp.TimedPoints.from_coords(times, xc, yc)
region = open_cp.RectangularRegion(0,500, 0,500)
predictor = phs.ProspectiveHotSpot(region)
predictor.data = points
# To be correct, we should also adjust the weight, as now we have changed
# the ratio between space and time.
predictor.grid = 20
prediction = predictor.predict(times[-1], times[-1])
Explanation: Sources
Bowers, Johnson, Pease, "Prospective hot-spotting: The future of crime mapping?", Brit. J. Criminol. (2004) 44 641--658. doi:10.1093/bjc/azh036
Johnson et al., "Prospective crime mapping in operational context", Home Office Online Report 19/07 Police online library
Algorithm
Grid the space
Divide the area of interest into a grid. The grid is used for both the algorithm, and for data visualisation. There is some discussion about reasonable grid sizes. 10m by 10m or 50m by 50m have been used. Later studies in the literature seem to use a somewhat larger grid size, 100m or 150m square.
Aim of algorithm
We select "bandwidths": space/time regions of the data. Common values are to look at events within 400m and the last 2 months (8 weeks). For each grid cell, for each event falling in this range, we compute a weighting for the event, and then sum all the weightings to produce a (un-normalised) "risk intensity" for that cell.
Choice of weights
I believe the original paper (1) is unclear on this. The discussion on page 9 shows a formula involving "complete 1/2 grid widths" but does not give details as to how, exactly, such a distance is to be computed. The next paragraph gives a couple of examples which seem unclear, as it simply talks about "neighbouring cell". No formula is given for total weight, but we can infer it from the examples.
Let $t_i$ be the elapsed time between now and the event, and $d_i$ the distance of event $i$ from the centre of the grid cell we are interested in. Then
$$ w = \sum_{t_i \leq t_\max, d_i \leq d_\max} \frac{1}{1+d_i} \frac{1}{1+t_i}. $$
For this to make sense, we introduce units:
$t_i$ is the number of whole weeks which have elapsed. So if today is 20th March, and the event occurred on the 17th March, $t_i=0$. If the event occurred on the 10th, $t_i=1$.
$d_i$ is the number of "whole 1/2 grid widths between the event" and the centre of the cell. Again, this is slightly unclear, as an event occurring very near the edge of a cell would (thanks to Pythagoras) have $d_i=1$, while the example in the paper suggests always $d_i=0$ in this case. We shall follow the examples, and take $d_i$ to be the distance measured on the grid (so neighbouring cells are distance 1 apart, and so forth). This is still unclear, as are diagonally adjacent cells "neighbours" or not? We give options in the code.
Paper (2) uses a different formula, and gives no examples:
$$ w = \sum_{t_i \leq t_\max, d_i \leq d_\max} \Big( 1 + \frac{1}{d_i} \Big) \frac{1}{t_i}. $$
where we are told only that:
$t_i$ is the elapsed time (but using $1/t_i$ suggests very large weights for events occurring very close to the time of analysis).
$d_i$ is the "number of cells" between the event and the cell in question. The text also notes: "Thus, if a crime occurred within the cell under consideration, the distance would be zero (actually, for computational reasons 1) if it
occurred within an adjacent cell, two, and so on." What to do about diagonal cells is not specified: sensible choices might be that a cell diagonally offset from the cell of interest is either distance 2 or 3. However, either choice would seem to introduce an anisotropic component which seems unjustified.
It is not clear to me that (2) gives the temporal and spatial bandwidths used.
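To make formula (1) concrete, here is a minimal sketch (plain numpy with made-up event distances and ages, not the open_cp implementation):
import numpy as np
# Hypothetical example: grid distances (in whole grid widths) and ages (in whole
# weeks) of the events that fall inside the space/time bandwidths for one cell.
d = np.array([0, 1, 2])
t = np.array([0, 1, 1])
# Formula (1): each event contributes 1/(1+d) * 1/(1+t); the cell weight is the sum.
w = np.sum(1 / (1 + d) * 1 / (1 + t))
print(w)   # 1 + 1/4 + 1/6 ~= 1.417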
Coupled units
Notice that both weight functions couple the "units" of time and space. For example, if we halve the cell width used, then (roughly speaking) each $d_i$ will double, while the $t_i$ remain unchanged. This results in the time component now having a larger influence on the weight.
It hence seems sensible that we scale both time and distance together.
If we run one test with a grid size of 50m and the time unit as 7 days,
then another test could be a grid size of 25m, but also with the time unit as 3.5 days.
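A tiny worked example of the coupling (hypothetical numbers, not taken from the papers): one event 100m from the cell centre and 10 days old, evaluated under three unit choices.
# Distance and time factors of formula (1) under different unit choices.
# Halving only the grid width shrinks the distance factor while leaving the
# time factor alone; halving the time unit as well rescales both together.
for grid_m, week_days in [(50, 7.0), (25, 7.0), (25, 3.5)]:
    d = 100 // grid_m          # distance in whole grid widths
    t = int(10 // week_days)   # age in whole "weeks" (time units)
    print(grid_m, week_days, 1 / (1 + d), 1 / (1 + t))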
Variations
Paper (2) introduces a variation of this method:
For the second set of results for the prospective method, for each cell, the crime that confers
the most risk is identified and the cell is assigned the risk intensity value generated by that
one point.
Again, this is not made entirely clear, but I believe it means that we look at the sum above, and instead of actually computing the sum, we compute each summand and then take the largest value to be the weight.
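In code terms the two variants differ only in the final aggregation over the per-event contributions; a minimal sketch with made-up values:
import numpy as np
contributions = 1 / (1 + np.array([0, 1, 2])) / (1 + np.array([0, 1, 1]))  # per-event weights for one cell
w_sum = contributions.sum()   # the original formulation: add all contributions
w_max = contributions.max()   # the paper (2) variation: keep only the largest contribution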
Generating predictions
The "risk intensity" for each grid cell is computed, and then displayed graphically as relative risk. For example:
Visualise by plotting the top 1% of grid cells, top 5% and top 10% as different colours. Paper (2) does this.
Visualise by generating a "heat map". Paper (1) does this.
When using the risk intensity to make predictions, there are two reasonable choices:
Compute the risk intensity for today, using all the data up until today. Treat this as a risk profile for the next few days in time.
For each day into the future we wish to predict, recompute the risk intensity.
The difference between (1) and (2) is that events may change their time-based weighting (or even fall out of the temporal bandwidth completely). For example, if today is the 20th March and an event occurred on the 14th, we consider it as occurring zero whole weeks ago, and so it contributes a weight of $1/1 = 1$ (in the 1st formula, for example). However, if we recompute the risk for the 22nd March, this event is now one whole week in the past, and so the weight becomes $1/2$.
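The "whole weeks elapsed" bookkeeping is just integer division of the day difference; a quick check with the dates from this example:
import datetime
event = datetime.date(2017, 3, 14)
for today in [datetime.date(2017, 3, 20), datetime.date(2017, 3, 22)]:
    weeks = (today - event).days // 7      # whole weeks elapsed
    print(today, weeks, 1 / (1 + weeks))   # time factor from formula (1): 1 then 1/2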
Aliasing issues
This issue falls under what I term an "aliasing issue" which comes about as we are taking continuous data and making it discrete:
We lay down a grid, making space discrete, because we measure distance as some multiple of "whole grid width".
We measure time in terms of "whole weeks" but seem to make day level predictions.
It would appear, a priori, that changing the offset of the grid (e.g. moving the whole grid 10m north) could cause a lot of events to jump from one grid cell to another.
Implementation
We keep the grid for "prediction" purposes, but we allow a large range of "weights" to be plugged in, from various "guesses" as to what exactly the original studies used, to variations of our own making.
Note, however, that this is still ultimately a "discrete" algorithm. We give a variation which generates a continuous kernel (and then bins the result for visualisation / comparison purposes) as a different prediction method, see below.
End of explanation
def plot_events_by_week(ax):
for week in range(2):
start = times[-1] - datetime.timedelta(days = 7 * week)
end = times[-1] - datetime.timedelta(days = 7 * week + 7)
mask = ~( (end < points.timestamps) & (points.timestamps <= start) )
ax.scatter(np.ma.masked_where(mask, points.xcoords),
np.ma.masked_where(mask, points.ycoords),
marker="+", color="black", alpha = 1 - week * 0.5)
fig, ax = plt.subplots(figsize=(8,8))
ax.set(xlim=[region.xmin, region.xmax], ylim=[region.ymin, region.ymax])
ax.pcolormesh(*prediction.mesh_data(), prediction.intensity_matrix, cmap="Blues")
plot_events_by_week(ax)
Explanation: Visualise the risk intensity directly
Our random data includes events from the past week, and from one (whole) week ago. We visualise this by plotting the older events in a fainter colour.
End of explanation
import matplotlib.colors
def make_risk_chart(prediction, ax=None):
bins = np.array([0.9, 0.95, 0.99])
binned = np.digitize(prediction.percentile_matrix(), bins)
masked = np.ma.masked_where(binned == 0, binned)
fixed_colour = matplotlib.colors.ListedColormap(["blue", "yellow", "red"])
if ax is None:
_, ax = plt.subplots(figsize=(8,8))
ax.set(xlim=[region.xmin, region.xmax], ylim=[region.ymin, region.ymax])
ax.pcolormesh(*prediction.mesh_data(), masked, cmap=fixed_colour)
ax.scatter(points.xcoords, points.ycoords, marker="+", color="black")
make_risk_chart(prediction)
data = prediction.intensity_matrix.ravel().copy()
data.sort()
index = len(data) // 100
print("Intensity ranges from {} to {}".format(
np.min(prediction.intensity_matrix), np.max(prediction.intensity_matrix)))
print("1%, 5% and 10% points are {}, {}, {}".format(
data[len(data) - 1 - len(data) // 100],
data[len(data) - 1 - 5 * len(data) // 100],
data[len(data) - 1 - 10 * len(data) // 100] ))
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(np.arange(len(data)) * 100 / (len(data)-1), data)
ax.set_xlabel("Percentage")
ax.set_ylabel("Risk Intensity")
None
Explanation: Visualise the top 1%, 5% and 10% of risk
It might seem strange that some events seem to confer so little risk that the cell they appear in does not reach the top 10%. This is because the weight is additive, and so we strongly weight cells which are near a number of different events. Remember also that we weight recent events more.
Furthermore, because we look at the percentiles here, but above we view the data using a linear colouring, we are not really comparing like with like. You can see this below, where we plot the (inverse of the) cumulative probability density and see that the distribution has a heavy tail.
End of explanation
cts_predictor = phs.ProspectiveHotSpotContinuous()
cts_predictor.data = points
cts_predictor.grid = 20
cts_prediction = cts_predictor.predict(times[-1], times[-1])
image_size = 250
density = np.empty((image_size, image_size))
for i in range(image_size):
for j in range(image_size):
density[j][i] = cts_prediction.risk((i + 0.5) / image_size * 500, (j + 0.5) / image_size * 500)
fig, ax = plt.subplots(figsize=(10,10))
ax.imshow(density, cmap="Blues", extent=(0,500,0,500), origin="lower", interpolation="bilinear")  # origin="lower" so row 0 of the array is drawn at the bottom, matching the coordinates
plot_events_by_week(ax)
ax.set(xlim=[0, 500], ylim=[0, 500])
None
Explanation: Continuous kernel estimation
End of explanation
grid = open_cp.predictors.GridPredictionArray.from_continuous_prediction_region(cts_prediction, region, 20, 20)
fig, ax = plt.subplots(ncols=2, figsize=(16,7))
ax[0].pcolormesh(*prediction.mesh_data(), prediction.intensity_matrix, cmap="Blues")
ax[1].pcolormesh(*grid.mesh_data(), grid.intensity_matrix, cmap="Blues")
for i, t in enumerate(["Grid based method", "Continuous kernel method"]):
ax[i].set(xlim=[region.xmin, region.xmax], ylim=[region.ymin, region.ymax])
plot_events_by_week(ax[i])
ax[i].set_title("From "+t)
fig, ax = plt.subplots(ncols=2, figsize=(16,7))
make_risk_chart(prediction, ax[0])
make_risk_chart(grid, ax[1])
for i, t in enumerate(["Grid based method", "Continuous kernel method"]):
ax[i].set_title("From "+t)
Explanation: We note that, unlike the "retrospective" hotspotting technique, the kernel does not decay smoothly to zero. This accounts for the "noisier" looking image.
End of explanation |
13,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to extract topics from a large collection of text with Python?
Have you ever suffered from information overload at work or in your studies? There is a method that can read huge numbers of articles for you, extract the different topics and their corresponding keywords, and let you grasp the gist at a glance. This article uses Python to extract topics from more than 1,000 texts, walking you step by step through the unsupervised machine-learning method LDA. Want to give it a try?
Almost every modern person has felt the pain of information overload: more articles than you can read, more music than you can listen to, more videos than you can watch. Yet real-world pressure means you cannot simply give them up.
Preparation
pip install jieba
pip install pyldavis
pip install pandas,sklearn
To handle tabular data we again use the dataframe tool Pandas. Import it first, then read in our data file datascience.csv.
Step1: (1024, 3)
The numbers of rows and columns match what we scraped, so the check passes.
Word segmentation
Next we need to do an important piece of work: word segmentation.
We first import the jieba word-segmentation package.
This time we are not dealing with a single text but with more than 1,000 of them, so we need to process them in bulk. That means first writing a function that segments a single text.
With this function in place, we can call it repeatedly to process all of the text (the body column) in the dataframe. You could of course write a loop yourself to do this, but here we use the more efficient apply function. If you are interested in this function, you can watch this tutorial video for a detailed introduction.
The following code may take a little while to run; please be patient.
Step2: We need to set the number of topics by hand. This requirement surprises many people: how am I supposed to know how many topics this pile of articles contains?!
Don't worry. When applying LDA, specifying (or, frankly, guessing) the number of topics is required. If you only need to split the articles roughly into a few broad categories, set the number small; conversely, if you want to identify very fine-grained topics, increase it.
If you are not satisfied with the resulting split, you can keep iterating and adjust the number of topics to improve it.
Here we start by trying 5 categories.
Step3: At this point, LDA has successfully extracted the topics for us. But I know you are not entirely satisfied, because the result is not very intuitive.
So let's make it more intuitive.
Run the following commands and something interesting will happen. | Python Code:
import pandas as pd
df = pd.read_csv("datascience.csv", encoding='gb18030') # Note that the file uses the Chinese GB18030 encoding, not Pandas' default, so we must specify it explicitly to avoid garbled text.
# Then look at the first few rows of the dataframe to confirm it was read correctly.
df.head()
# Check the length of the dataframe to confirm the data was read in full.
df.shape
Explanation: How to extract topics from a large collection of text with Python?
Have you ever suffered from information overload at work or in your studies? There is a method that can read huge numbers of articles for you, extract the different topics and their corresponding keywords, and let you grasp the gist at a glance. This article uses Python to extract topics from more than 1,000 texts, walking you step by step through the unsupervised machine-learning method LDA. Want to give it a try?
Almost every modern person has felt the pain of information overload: more articles than you can read, more music than you can listen to, more videos than you can watch. Yet real-world pressure means you cannot simply give them up.
Preparation
pip install jieba
pip install pyldavis
pip install pandas,sklearn
To handle tabular data we again use the dataframe tool Pandas. Import it first, then read in our data file datascience.csv.
End of explanation
import jieba
def chinese_word_cut(mytext):
return " ".join(jieba.cut(mytext))
df["content_cutted"] = df.content.apply(chinese_word_cut)
# After it finishes, check whether the text has been segmented correctly.
df.content_cutted.head()
# Vectorize the text
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
n_features = 1000
tf_vectorizer = CountVectorizer(strip_accents = 'unicode',
max_features=n_features,
stop_words='english',
max_df = 0.5,
min_df = 10)
tf = tf_vectorizer.fit_transform(df.content_cutted)
Explanation: (1024, 3)
The numbers of rows and columns match what we scraped, so the check passes.
Word segmentation
Next we need to do an important piece of work: word segmentation.
We first import the jieba word-segmentation package.
This time we are not dealing with a single text but with more than 1,000 of them, so we need to process them in bulk. That means first writing a function that segments a single text.
With this function in place, we can call it repeatedly to process all of the text (the body column) in the dataframe. You could of course write a loop yourself to do this, but here we use the more efficient apply function. If you are interested in this function, you can watch this tutorial video for a detailed introduction.
The following code may take a little while to run; please be patient.
End of explanation
# Apply the LDA method
from sklearn.decomposition import LatentDirichletAllocation
n_topics = 5
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=50,
learning_method='online',
learning_offset=50.,
random_state=0)
# This step is computationally heavy and will run for a while; the Jupyter Notebook may appear unresponsive. Just wait a moment, there is no need to worry.
lda.fit(tf)
# A topic does not have a fixed name; it is characterised by a list of keywords. We define the following function to display the top keywords of each topic:
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
# With the function defined, we tentatively output the top 20 keywords of each topic.
n_top_words = 20
# The following commands print the keyword list of each topic in turn:
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
Explanation: We need to set the number of topics by hand. This requirement surprises many people: how am I supposed to know how many topics this pile of articles contains?!
Don't worry. When applying LDA, specifying (or, frankly, guessing) the number of topics is required. If you only need to split the articles roughly into a few broad categories, set the number small; conversely, if you want to identify very fine-grained topics, increase it.
If you are not satisfied with the resulting split, you can keep iterating and adjust the number of topics to improve it.
Here we start by trying 5 categories.
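A rough way to choose the number of topics less blindly is to fit a few candidate models and compare their perplexity on the same document-term matrix (lower is usually better, though the measure is imperfect). A minimal sketch, reusing the tf matrix built above and the same (older) scikit-learn parameter name n_topics (newer releases call it n_components):
from sklearn.decomposition import LatentDirichletAllocation
# Fit one model per candidate topic count and report its perplexity on the same data.
for k in (5, 10, 15, 20):
    candidate = LatentDirichletAllocation(n_topics=k, max_iter=20,
                                          learning_method='online',
                                          learning_offset=50.,
                                          random_state=0)
    candidate.fit(tf)
    print(k, candidate.perplexity(tf))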
End of explanation
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
pyLDAvis.sklearn.prepare(lda, tf, tf_vectorizer)
Explanation: At this point, LDA has successfully extracted the topics for us. But I know you are not entirely satisfied, because the result is not very intuitive.
So let's make it more intuitive.
Run the following commands and something interesting will happen.
End of explanation |
13,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with the python dyNET package
The dyNET package is intended for neural-network processing on the CPU, and is particularly suited for NLP applications. It is a python-wrapper for the dyNET C++ package written by Chris Dyer.
There are two modes of operation
Step1: The first block creates a model and populates it with parameters.
The second block creates a computation graph and adds the parameters to it, transforming them into Expressions.
The need to distinguish model parameters from "expressions" will become clearer later.
We now make use of the W and V expressions, in order to create the complete expression for the network.
Step2: Training
We now want to set the parameter weights such that the loss is minimized.
For this, we will use a trainer object. A trainer is constructed with respect to the parameters of a given model.
Step3: To use the trainer, we need to
Step4: The optimization step indeed made the loss decrease. We now need to run this in a loop.
To this end, we will create a training set, and iterate over it.
For the xor problem, the training instances are easy to create.
Step5: We now feed each question / answer pair to the network, and try to minimize the loss.
Step6: Our network is now trained. Let's verify that it indeed learned the xor function
Step7: In case we are curious about the parameter values, we can query them
Step8: To summarize
Here is a complete program
Step9: Dynamic Networks
Dynamic networks are very similar to static ones, but instead of creating the network once and then calling "set" in each training example to change the inputs, we just create a new network for each training example.
We present an example below. While the value of this may not be clear in the xor example, the dynamic approach
is very convenient for networks for which the structure is not fixed, such as recurrent or recursive networks. | Python Code:
# we assume that we have the dynet module in your path.
# OUTDATED: we also assume that LD_LIBRARY_PATH includes a pointer to where libcnn_shared.so is.
from dynet import *
# create a model and add the parameters.
m = Model()
pW = m.add_parameters((8,2))
pV = m.add_parameters((1,8))
pb = m.add_parameters((8))
renew_cg() # new computation graph. not strictly needed here, but good practice.
# associate the parameters with cg Expressions
W = parameter(pW)
V = parameter(pV)
b = parameter(pb)
#b[1:-1].value()
b.value()
Explanation: Working with the python dyNET package
The dyNET package is intended for neural-network processing on the CPU, and is particularly suited for NLP applications. It is a python-wrapper for the dyNET C++ package written by Chris Dyer.
There are two modes of operation:
Static networks, in which a network is built once and then fed with different inputs/outputs. Most NN packages work this way.
Dynamic networks, in which a new network is built for each training example (sharing parameters with the networks of other training examples). This approach is what makes dyNET unique, and where most of its power comes from.
We will describe both of these modes.
Package Fundamentals
The main piece of dyNET is the ComputationGraph, which is what essentially defines a neural network.
The ComputationGraph is composed of expressions, which relate to the inputs and outputs of the network,
as well as the Parameters of the network. The parameters are the things in the network that are optimized over time, and all of the parameters sit inside a Model. There are trainers (for example SimpleSGDTrainer) that are in charge of setting the parameter values.
We will not be using the ComputationGraph directly, but it is there in the background, as a singleton object.
When dynet is imported, a new ComputationGraph is created. We can then reset the computation graph to a new state
by calling renew_cg().
Static Networks
The life-cycle of a dyNET program is:
1. Create a Model, and populate it with Parameters.
2. Renew the computation graph, and create an Expression representing the network
(the network will include the Expressions for the Parameters defined in the model).
3. Optimize the model for the objective of the network.
As an example, consider a model for solving the "xor" problem. The network has two inputs, which can be 0 or 1, and a single output which should be the xor of the two inputs.
We will model this as a multi-layer perceptron with a single hidden node.
Let $x = x_1, x_2$ be our input. We will have a hidden layer of 8 nodes, and an output layer of a single node. The activation on the hidden layer will be a $\tanh$. Our network will then be:
$\sigma(V(\tanh(Wx+b)))$
Where $W$ is an $8 \times 2$ matrix, $V$ is a $1 \times 8$ matrix, and $b$ is an 8-dimensional vector.
We want the output to be either 0 or 1, so we take the output layer to be the logistic-sigmoid function, $\sigma(x)$, which maps inputs in $(-\infty, +\infty)$ to numbers in $[0,1]$.
We will begin by defining the model and the computation graph.
End of explanation
x = vecInput(2) # an input vector of size 2. Also an expression.
output = logistic(V*(tanh((W*x)+b)))
# we can now query our network
x.set([0,0])
output.value()
# we want to be able to define a loss, so we need an input expression to work against.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
x.set([1,0])
y.set(0)
print loss.value()
y.set(1)
print loss.value()
Explanation: The first block creates a model and populates it with parameters.
The second block creates a computation graph and adds the parameters to it, transforming them into Expressions.
The need to distinguish model parameters from "expressions" will become clearer later.
We now make use of the W and V expressions, in order to create the complete expression for the network.
End of explanation
trainer = SimpleSGDTrainer(m)
Explanation: Training
We now want to set the parameter weights such that the loss is minimized.
For this, we will use a trainer object. A trainer is constructed with respect to the parameters of a given model.
End of explanation
x.set([1,0])
y.set(1)
loss_value = loss.value() # this performs a forward through the network.
print "the loss before step is:",loss_value
# now do an optimization step
loss.backward() # compute the gradients
trainer.update()
# see how it affected the loss:
loss_value = loss.value(recalculate=True) # recalculate=True means "don't use precomputed value"
print "the loss after step is:",loss_value
Explanation: To use the trainer, we need to:
* call the forward_scalar method of ComputationGraph. This will run a forward pass through the network, calculating all the intermediate values until the last one (loss, in our case), and then convert the value to a scalar. The final output of our network must be a single scalar value. However, if we do not care about the value, we can just use cg.forward() instead of cg.forward_scalar().
* call the backward method of ComputationGraph. This will run a backward pass from the last node, calculating the gradients with respect to minimizing the last expression (in our case we want to minimize the loss). The gradients are stored in the model, and we can now let the trainer take care of the optimization step.
* call trainer.update() to optimize the values with respect to the latest gradients.
End of explanation
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
Explanation: The optimization step indeed made the loss decrease. We now need to run this in a loop.
To this end, we will create a training set, and iterate over it.
For the xor problem, the training instances are easy to create.
End of explanation
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
Explanation: We now feed each question / answer pair to the network, and try to minimize the loss.
End of explanation
x.set([0,1])
print "0,1",output.value()
x.set([1,0])
print "1,0",output.value()
x.set([0,0])
print "0,0",output.value()
x.set([1,1])
print "1,1",output.value()
Explanation: Our network is now trained. Let's verify that it indeed learned the xor function:
End of explanation
W.value()
V.value()
b.value()
Explanation: In case we are curious about the parameter values, we can query them:
End of explanation
# define the parameters
m = Model()
pW = m.add_parameters((8,2))
pV = m.add_parameters((1,8))
pb = m.add_parameters((8))
# renew the computation graph
renew_cg()
# add the parameters to the graph
W = parameter(pW)
V = parameter(pV)
b = parameter(pb)
# create the network
x = vecInput(2) # an input vector of size 2.
output = logistic(V*(tanh((W*x)+b)))
# define the loss with respect to an output y.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
# create training instances
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# train the network
trainer = SimpleSGDTrainer(m)
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
Explanation: To summarize
Here is a complete program:
End of explanation
from dynet import *
# create training instances, as before
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# create a network for the xor problem given input and output
def create_xor_network(pW, pV, pb, inputs, expected_answer):
renew_cg() # new computation graph
W = parameter(pW) # add parameters to graph as expressions
V = parameter(pV)
b = parameter(pb)
x = vecInput(len(inputs))
x.set(inputs)
y = scalarInput(expected_answer)
output = logistic(V*(tanh((W*x)+b)))
loss = binary_log_loss(output, y)
return loss
m2 = Model()
pW = m2.add_parameters((8,2))
pV = m2.add_parameters((1,8))
pb = m2.add_parameters((8))
trainer = SimpleSGDTrainer(m2)
seen_instances = 0
total_loss = 0
for question, answer in zip(questions, answers):
loss = create_xor_network(pW, pV, pb, question, answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
Explanation: Dynamic Networks
Dynamic networks are very similar to static ones, but instead of creating the network once and then calling "set" in each training example to change the inputs, we just create a new network for each training example.
We present an example below. While the value of this may not be clear in the xor example, the dynamic approach
is very convenient for networks for which the structure is not fixed, such as recurrent or recursive networks.
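As a purely illustrative sketch of why this matters (not part of the original tutorial; it reuses only the primitives shown above and assumes parameters pW (8x2), pb (8) and pV (1x8) created as before): the depth of the graph below depends on the length of each input sequence, which is awkward to express with one fixed network but natural when a fresh graph is built per example.
def variable_depth_network(pW, pb, pV, sequence):
    renew_cg()                      # a fresh graph for this example
    W = parameter(pW)               # the same parameters are shared across examples
    b = parameter(pb)
    V = parameter(pV)
    x = vecInput(2)
    x.set(sequence[0])
    h = tanh(W * x + b)
    for item in sequence[1:]:       # one extra layer per remaining input item:
        x = vecInput(2)             # the graph's depth follows the data
        x.set(item)
        h = tanh(W * x + b + h)
    return logistic(V * h)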
End of explanation |
13,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automata
Editing Automata
Vcsn provides different means to enter automata. One, which also applies to plain Python, is using the automaton constructor
Step1: See the documentation of vcsn.automaton for more details about this function. The syntax used to define the automaton is, however, described here.
In order to facilitate the definition of automata, Vcsn provides additional ''magic commands'' to the IPython Notebook. We will see through this guide how to use this command.
%%automaton
Step2: The first argument, here a, is the name of the variable in which this automaton is stored
Step3: You may pass the option -s or --strip to strip the automaton from its layer that keeps the state name you have chosen. In that case, the internal numbers are used, unrelated to the user names (actually, the numbers are assigned to state names as they are encountered starting from 0).
Step4: The second argument specifies the format in which the automaton is described, defaulting to auto, which means "guess the format"
Step5: Automata entered this way are persistent
Step6: The real added value is that now you can interactively edit this automaton
Step7: Beware however that these automata are not persistent
Step8: dot (read/write)
This format relies on the "dot" language of the GraphViz toolkit (http
Step9: efsm (read/write)
This format is designed to support import/export with OpenFST (http
Step10: The following sequence of operations uses OpenFST to determinize this automaton, and to load it back into Vcsn.
Step11: For what it's worth, the above sequence of actions is realized by a.fstdeterminize().
Vcsn and OpenFST compute the same automaton.
Step12: efsm for transducers (two-tape automata)
The following sequence shows the round-trip of a transducer between Vcsn and OpenFST.
Step13: Details about the EFSM format
The EFSM format is a simple format that puts together the various files that OpenFST uses to serialize and deserialize automata
Step14: grail (write)
This format is made to exchange automata with the Grail (http
Step15: tikz (write)
This format generates a LaTeX document that uses TikZ syntax to draw the automaton. Note that the layout is not computed | Python Code:
import vcsn
vcsn.automaton('''
context = "lal_char(ab), z
$ -> p <2>
p -> q <3>a,<4>b
q -> q a
q -> $
''')
Explanation: Automata
Editing Automata
Vcsn provides different means to enter automata. One, which also applies to plain Python, is using the automaton constructor:
End of explanation
%%automaton a
context = "lal_char(ab), z"
$ -> p <2>
p -> q <3>a, <4>b
q -> q a
q -> $
Explanation: See the documentation of vcsn.automaton for more details about this function. The syntax used to define the automaton is, however, described here.
In order to facilitate the definition of automata, Vcsn provides additional ''magic commands'' to the IPython Notebook. We will see through this guide how to use this command.
%%automaton: Entering an Automaton
IPython supports so-called "cell magic-commands", which start with %%. Vcsn provides the %%automaton magic command to enter the literal description of an automaton. For instance, the automaton above can be entered as follows:
End of explanation
a
Explanation: The first argument, here a, is the name of the variable in which this automaton is stored:
End of explanation
%%automaton --strip a
context = "lal_char(ab), z"
$ -> p <2>
p -> q <3>a, <4>b
q -> q a
q -> $
a
Explanation: You may pass the option -s or --strip to strip the automaton from its layer that keeps the state name you have chosen. In that case, the internal numbers are used, unrelated to the user names (actually, the numbers are assigned to state names as they are encountered starting from 0).
End of explanation
%%automaton a dot
digraph
{
vcsn_context = "lal_char(ab), z"
I -> p [label = "<2>"]
p -> q [label = "<3>a, <4>b"]
q -> q [label = a]
q -> F
}
%%automaton a
digraph
{
vcsn_context = "lal_char(ab), z"
I -> p [label = "<2>"]
p -> q [label = "<3>a, <4>b"]
q -> q [label = a]
q -> F
}
Explanation: The second argument specifies the format in which the automaton is described, defaulting to auto, which means "guess the format":
End of explanation
%automaton a
Explanation: Automata entered this way are persistent: they are stored in the notebook and will be recovered when the page is reopened.
%automaton: Text-Based Edition of an Automaton
In IPython "line magic commands" begin with a single %. The line magic %automaton takes three arguments:
1. the name of the automaton
2. the format you want the textual description of the automaton. Defaults to auto.
3. the display mode: h for horizontal and v for vertical. Defaults to h.
Contrary to the cell magic, the %automaton can be used to update an existing automaton:
End of explanation
%automaton b fado
Explanation: The real added value is that now you can interactively edit this automaton: changes in the text are immediately propagated on the rendered automaton.
When given a fresh variable name, %automaton creates a dummy automaton that you can use as a starting point:
End of explanation
%%automaton a
context = "lal_char(ab), z"
$ -> p <2>
p -> q <3>a, <4>b
q -> q a
q -> $
Explanation: Beware however that these automata are not persistent: changes will be lost when the notebook is closed.
Automata Formats
Vcsn supports different input and output formats. Some, such as tikz, are export-only formats: they cannot be read by Vcsn.
daut (read/write)
This simple format is work in progress: its precise syntax is still subject to changes. It is roughly a simplification of the dot syntax. The following example should suffice to understand the syntax. If "guessable", the context can be left implicit.
End of explanation
%%automaton a dot
// The comments are introduced with //, or /* ... */
//
// The overall syntax is that of Dot for directed graph ("digraph").
digraph
{
// The following attribute defines the context of the automaton.
vcsn_context = "lal_char, b"
// Initial states are denoted by an edge between a node whose name starts
// with an "I". So "0" is a initial state.
I -> 0
// Transitions are edges whose label is that of the transition.
0 -> 0 [label = "a"]
0 -> 0 [label = "b"]
0 -> 1 [label = "c, d"]
// Final states are denoted by an edge to a node whose name starts with "F".
1 -> Finish
}
Explanation: dot (read/write)
This format relies on the "dot" language of the GraphViz toolkit (http://graphviz.org). This is the default format for I/O in Vcsn.
An automaton looks as follows:
End of explanation
a = vcsn.context('lal_char(ab), zmin').expression('[ab]*a(<2>[ab])').automaton()
a
efsm = a.format('efsm')
print(efsm)
Explanation: efsm (read/write)
This format is designed to support import/export with OpenFST (http://openfst.org): it wraps its multi-file format (one file describes the automaton with numbers as transition labels, and one or several others define these labels) into a single format. It is not designed to be used by humans, but rather to be handled by two tools:
- efstcompile to compile such a file into the OpenFST binary file format,
- efstdecompile to extract an efsm file from a binary OpenFST file.
efsm for acceptors (single tape automata)
As an example, consider the following exchange between Vcsn and OpenFST.
End of explanation
import os
# Save the EFSM description of the automaton in a file.
with open("a.efsm", "w") as file:
print(efsm, file=file)
# Compile the EFSM into an OpenFST file.
os.system("efstcompile a.efsm >a.fst")
# Call OpenFST's determinization.
os.system("fstdeterminize a.fst >d.fst")
# Convert from OpenFST format to EFSM.
os.system("efstdecompile d.fst >d.efsm")
# Load this file into Python.
with open("d.efsm", "r") as file:
d = file.read()
# Show the result.
print(d)
# Now read it as an automaton.
d_ofst = vcsn.automaton(d, 'efsm')
d_ofst
Explanation: The following sequence of operations uses OpenFST to determinize this automaton, and to load it back into Vcsn.
End of explanation
a.determinize()
Explanation: For what it's worth, the above sequence of actions is realized by a.fstdeterminize().
Vcsn and OpenFST compute the same automaton.
End of explanation
t = a.partial_identity()
t
tefsm = t.format('efsm')
print(tefsm)
vcsn.automaton(tefsm)
Explanation: efsm for transducers (two-tape automata)
The following sequence shows the round-trip of a transducer between Vcsn and OpenFST.
End of explanation
a = vcsn.B.expression('a+b').standard()
a
print(a.format('fado'))
Explanation: Details about the EFSM format
The EFSM format is a simple format that puts together the various files that OpenFST uses to serialize and deserialize automata: one or two files to describe the labels (called "symbol tables"), and one to list the transitions. More details about these files can be found on FSM Man Pages.
When reading an EFSM file, Vcsn expects the following bits (a small inspection sketch follows the list):
a line arc_type=TYPE which specifies the weightset. If TYPE is log or log64, this is mapped to the log weightset; if it is standard, then it is mapped to zmin or rmin, depending on whether floating-point numbers were used.
a "here-document" (the Unix name for embedded files, delimited by <<EOF to a line equal to EOF) for the first symbol table. If the here-document is named isymbols.txt, then the automaton is a transducer, otherwise it is considered an acceptor.
if the automaton is a transducer, a second symbol table, osymbols.txt, to describe the labels of the second tape.
then a final here-document, transitions.fsm, which lists the transitions.
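To see these pieces concretely, here is a small sketch (plain Python string handling, not a Vcsn API) that scans the efsm text printed earlier for the markers just described:
# Print only the structural markers of the EFSM dump: the arc_type line,
# the here-document openers, and the EOF terminators.
for line in efsm.splitlines():
    if line.startswith("arc_type=") or "<<EOF" in line or line.strip() == "EOF":
        print(line)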
fado (read/write)
This is the native language of the FAdo platform (http://fado.dcc.fc.up.pt). Weighted automata are not supported.
End of explanation
a = vcsn.B.expression('a+b').standard()
a
print(a.format('grail'))
Explanation: grail (write)
This format is made to exchange automata with the Grail (http://grailplus.org). Weighted automata are not supported.
End of explanation
a = vcsn.Q.expression('<2>a+<3>b').standard()
a
print(a.format('tikz'))
Explanation: tikz (write)
This format generates a LaTeX document that uses TikZ syntax to draw the automaton. Note that the layout is not computed: all the states are simply reported in a row. You will have to tune the positions of the states by hand. However, it remains a convenient way to start.
End of explanation |
13,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported
Step1: You can do a lot with the numpy module. Below is an example to jog your memory
Step2: Do you remember loops? Let's use a while loop to make an array of 10 numbers. Let's have each element be increased by 2 compared with the previous element. Let's also have the first element of the array be 1.
Step3: There's your quick review of numpy and a while loop. Now we can move on to the content of Lecture 3.
A. Data Introspection
One of the most common tasks in experimental physics is trying to model experimental data with a function. This lecture will walk through how to accomplish this with python. First, let's import some data.
Step4: Let's plot it to see what it looks like!
In python there are multiple plotting libraries, but the moset common one is matplotlib, and that is the one we will be using today.
Step7: Usually, you will want to have some theory-based motivation for the function you choose to model some set of data, but for this example, we don't know anything about the data other than the points themselves. In this type of situation, trying to fit a simple function to the data is not a bad first step in trying to understand it. What function do you think might fit this data based on how it looks in the plot?
<details>
<summary>Answer</summary>
It looks like the data is shaped like a normal (gaussian) distribution, so let's try to fit it to that! First, let's define a gaussian function for fitting.
The equation for a Gaussian curve is the following
Step8: What values of these parameters do you think will match the data above?
Step9: Let's try plotting the function the function with these parameters next to the data! For this, we should define some evenly-spaced x-values to calculate the function at using np.linspace
Step10: plt.plot is the standard do-it-all plotting function in matplotlib. Everything about how the series looks can be modified.
B. Goodness-of-fit
How good was your guess? How do you even answer that question?
<details>
<summary>Answer</summary>
Let's use something called the $L_2$ norm
Step11: Try changing the parameters to something bad and see what happens to the value of l2_norm. Since this definition of the $L_2$ norm is not normalized by something like a standard deviation of the data, it can't tell us in absolute terms how good our funciton fits, but it can at least tell us if one set of parameters fits better than another. This is really helpful!
Step12: But, how do we know when we have a best fit? How would you try to figure it out?
Thankfully, we don't have to create our own method to do this. The smart people working on the scipy package have already built an optimized tool for us to use! It's called the curve_fit function as is part of the scipy.optimize sub-package.
C. Fitting
Step13: scipy.optimize.curve_fit is a type of minimization function. In this case, the function finds the parameters of another given function that minimize the $L_2$ norm between the data points and what our gauss function thinks the data points should be at a given x.
A quick, useful way to see what a function does without having to google it is to use the built-in python help function.
Step14: That gave us a lot of information about the curve_fit function! As you can see, curve_fit takes a function as its first parameter, and it tells us exactly how to arrange the parameters of that function (thankfully, our gauss function should already have this form). The next two parameters curve_fit takes are xdata and ydata (x_data and y_data as we defined them). The rest are optional and will be talked about briefly at the end of this lecture.
Let's try calling help on the gauss function we defined above.
Step15: What do you think the help function does?
<details>
<summary>Answer</summary>
You can see help just returns the function "signature" and the string at the start of the function (called the "docstring").
</details>
Now, let's use curve_fit to fit the data to our function!
Step16: Do you know what popt is? How would you find out?
<details>
<summary>Answer</summary>
If you look back at the `help` output from `curve_fit`, `popt` is a list of the best-fit parameters of our gauss function for this data. The parameters in the list are in the order that the parameters are listed in our `gauss` function (`mean`, `std`, `amp`). Let's try plotting the data, our guess, and the best fit from `curve_fit`!
</details>
Step17: How close was your guess to the best fit?
D. Interpreting Fitting Errors
pcov is a little more complicated. pcov is what's called the "covariance matrix" of the best fit parameters. As shown in the help output, the standard deviations of the parameters can be recovered from this matrix in the following way
Step18: The covariance matrix looks like this for the parameters in our gauss function
\begin{bmatrix}
s_{\mu}^2 & cov(\mu, \sigma) & cov(\mu, A)\
cov(\sigma, \mu) & s_{\sigma}^2 & cov(\sigma, A)\
cov(A, \mu) & cov(A, \sigma) & s_A^2
\end{bmatrix}
Where $s_x^2$ is the variance of a parameter $x$ and $s_x$ is its estimated standard deviation, and $cov(x, y)$ is the covariance between parameters $x$ and $y$.
Can you guess what np.sqrt(np.diag(pcov)) does now?
Covariance can be difficult to visualize. It's often much easier to look at something called the "correlation coefficient" instead. The correlation coefficients can be easily found from the covariance matrix by using this transformation | Python Code:
import numpy as np
Explanation: Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported:
End of explanation
np.linspace(0,10,11)
Explanation: You can do a lot with the numpy module. Below is an example to jog your memory:
End of explanation
# your code here
#start by defining the length of the array
arrayLength = 10
#let's set the array to currently be an array of 0s
myArray = np.zeros(arrayLength) #make a numpy array of 10 zeros
# Let's define the first element of the array
myArray[0] = 1
i = 1 #with the first element defined, we can calculate the rest of the sequence beginning with the 2nd element
while i < arrayLength:
myArray[i] = myArray[i-1]+2
i = i + 1
print(myArray)
Explanation: Do you remember loops? Let's use a while loop to make an array of 10 numbers. Let's have each element be increased by 2 compared with the previous element. Let's also have the first element of the array be 1.
End of explanation
# This data was generated using the methods described in the preview notebook. Feel free to read through the preview to see how it was done.
data_filename = "https://raw.githubusercontent.com/astroumd/GradMap/master/notebooks/Lectures2021/Lecture3/Data/photopeak.txt"
x_data, y_data = np.loadtxt(data_filename, usecols=(0, 1), unpack=True)
print(x_data, y_data)
Explanation: There's your quick review of numpy and a while loop. Now we can move on to the content of Lecture 3.
A. Data Introspection
One of the most common tasks in experimental physics is trying to model experimental data with a function. This lecture will walk through how to accomplish this with python. First, let's import some data.
End of explanation
# Import the matplotlib package. Really, we only need the pyplot sub-package, so we import it this way. Renaming it to 'plt' is common practice.
import matplotlib.pyplot as plt
# The most basic kind of scatter plot you can plot using matplotlib is done like this:
plt.scatter(x_data, y_data)
# At the end of a cell where you are plotting things, this line tells python that you want to display the plots you defined in the cell.
plt.show()
Explanation: Let's plot it to see what it looks like!
In python there are multiple plotting libraries, but the most common one is matplotlib, and that is the one we will be using today.
End of explanation
# While not necessary in python, defining datatypes using what are called 'type hints' has become the norm for modern python.
# It has a lot of benefits, and if you have time in the future, you should consider learning to use them.
from typing import Union
# The docstring in this function is formatted in 'google style'. 'Numpy style' is another popular way to write docstrings.
# Choosing one of these styles and using them for all of your docstrings will make your code much easier to read and maintain.
def gauss(x: Union[float, np.array], mean: float, std: float, amp: float) -> Union[float, np.array]:
A general implementation of a gaussian function.
f(x) = A * e^(-1/2 * ((x-mu)/sigma)^2)
To normalize this function, you would multiply by sigma * sqrt(2*pi)/A.
Args:
x: the input for f(x).
mean: the center of the gaussian function (mu).
        std: the standard deviation of the gaussian function (sigma).
amp: the scaling amplitude of the gaussian function (A).
Returns:
The amplitude of the gaussian function for a given x.
return amp * np.exp(-1 / 2 * ((x - mean) / std)**2)
# This is what is in the student version:
# # Change the body of this function so that it returns the value at x of the gaussian defined by the parameters.
# def gauss(x, mean, std, amp):
# Write what the function does here!
# raise NotImplementedError
Explanation: Usually, you will want to have some theory-based motivation for the function you choose to model some set of data, but for this example, we don't know anything about the data other than the points themselves. In this type of situation, trying to fit a simple function to the data is not a bad first step in trying to understand it. What function do you think might fit this data based on how it looks in the plot?
<details>
<summary>Answer</summary>
It looks like the data is shaped like a normal (gaussian) distribution, so let's try to fit it to that! First, let's define a gaussian function for fitting.
The equation for a Gaussian curve is the following:
$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(x-\mu)^2}{2\sigma^2}}$
where $\mu$ is the mean and $\sigma$ is the standard deviation. We also want to be able to scale our function to fit the scale of the data, so we should multiply the equation by some amplitude, A.
</details>
End of explanation
mu = 0.67
sigma = 0.04
amp = 5500
# This is what is in the student version:
# mu =
# sigma =
# amp =
Explanation: What values of these parameters do you think will match the data above?
End of explanation
x_points = np.linspace(0.5, 1, 100)
plt.plot(x_data, y_data, linestyle='', marker='.', markersize=12) # This line makes the same plot as plt.scatter, but avoids some quirks in matplotlib.
# Try replacing this with plt.scatter(x_data, y_data) and see what happens!
# plt.scatter(x_data, y_data)
y_gauss = gauss(x_points, mu, sigma, amp)
plt.plot(x_points, y_gauss)
plt.show()
# This is what is in the student version:
# x_points = # Use np.linspace here.
# plt.plot(x_data, y_data, linestyle='', marker='.', markersize=12) # This line makes the same plot as plt.scatter, but avoids some quirks in matplotlib.
# # Try replacing this with plt.scatter(x_data, y_data) and see what happens!
# # plt.scatter(x_data, y_data)
# y_gauss = # Finish this line to get the y-values using the function you made above.
# plt.plot(x_points, y_gauss)
# plt.show()
Explanation: Let's try plotting the function with these parameters next to the data! For this, we should define some evenly-spaced x-values to calculate the function at using np.linspace:
End of explanation
y_data_gauss = gauss(x_data, mu, sigma, amp)
l2_norm = np.sum((y_data - y_data_gauss)**2)
print(l2_norm)
# This is what is in the student version:
# y_data_gauss = # Finish this line to get y_gauss at the x_data.
# l2_norm = # Finish this line to calculate the L_2 norm.
# print(l2_norm)
Explanation: plt.plot is the standard do-it-all plotting function in matplotlib. Everything about how the series looks can be modified.
B. Goodness-of-fit
How good was your guess? How do you even answer that question?
<details>
<summary>Answer</summary>
Let's use something called the $L_2$ norm: $||y - f(x)||^2$ to get a metric of the difference between our data and our function. This may sound and look fancy, but all it's doing is calculating the distance at each x-value in the data between the data y-value and the function y-value. Then, it squares those distances and adds them all together.
This is defined by `np.sum((y_data - y_data_gauss)**2)` for our setup.
</details>
End of explanation
mu = 0.3
sigma = 0.1
amp = 500
l2_norm = np.sum((y_data - gauss(x_data, mu, sigma, amp))**2)
print(l2_norm)
# This is what is in the student version:
# mu =
# sigma =
# amp =
# l2_norm = np.sum((y_data - gauss(x_data, mu, sigma, amp))**2)
# print(l2_norm)
Explanation: Try changing the parameters to something bad and see what happens to the value of l2_norm. Since this definition of the $L_2$ norm is not normalized by something like a standard deviation of the data, it can't tell us in absolute terms how good our function fits, but it can at least tell us if one set of parameters fits better than another. This is really helpful!
End of explanation
import scipy.optimize
Explanation: But, how do we know when we have a best fit? How would you try to figure it out?
Thankfully, we don't have to create our own method to do this. The smart people working on the scipy package have already built an optimized tool for us to use! It's called the curve_fit function and is part of the scipy.optimize sub-package.
C. Fitting
End of explanation
help(scipy.optimize.curve_fit)
Explanation: scipy.optimize.curve_fit is a type of minimization function. In this case, the function finds the parameters of another given function that minimize the $L_2$ norm between the data points and what our gauss function thinks the data points should be at a given x.
A quick, useful way to see what a function does without having to google it is to use the built-in python help function.
End of explanation
help(gauss)
# This is what is in the student version:
#
Explanation: That gave us a lot of information about the curve_fit function! As you can see, curve_fit takes a function as its first parameter, and it tells us exactly how to arrange the parameters of that function (thankfully, our gauss function should already have this form). The next two parameters curve_fit takes are xdata and ydata (x_data and y_data as we defined them). The rest are optional and will be talked about briefly at the end of this lecture.
Let's try calling help on the gauss function we defined above.
End of explanation
# The variable names popt and pcov come from the curve_fit function. We will get into what they mean soon!
popt, pcov = scipy.optimize.curve_fit(gauss, x_data, y_data)
print(popt)
print(pcov)
Explanation: What do you think the help function does?
<details>
<summary>Answer</summary>
You can see help just returns the function "signature" and the string at the start of the function (called the "docstring").
</details>
Now, let's use curve_fit to fit the data to our function!
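Among the optional parameters mentioned above, p0 (the initial guess for the fit parameters) is worth knowing right away: curve_fit otherwise starts from all ones, and a poor starting point can prevent convergence. A small sketch reusing the gauss function, the data, and the hand-picked values from earlier:
# Pass an initial guess (mean, std, amp) via p0; curve_fit starts its
# optimization from these values instead of the default all-ones guess.
popt_guess, pcov_guess = scipy.optimize.curve_fit(gauss, x_data, y_data, p0=(0.67, 0.04, 5500))
print(popt_guess)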
End of explanation
mu = 0.67
sigma = 0.04
amp = 5500
x_points = np.linspace(0.5, 1, 100)
plt.plot(x_data, y_data, linestyle='', marker='.', markersize=12, label='data') # The parameter "label" gives a name to the data series.
plt.plot(x_points, gauss(x_points, mu, sigma, amp), label='hand-picked parameters')
plt.plot(x_points, gauss(x_points, *popt), label='best-fit parameters') # using "*popt" is a nice python trick that expands popt into the three individual values it contains.
plt.legend() # This function produces a legend of the plots and their names.
plt.show()
# # This is what is in the student version:
# x_points =
# plt.plot(x_data, y_data, linestyle='', marker='.', markersize=12, label='data') # The parameter "label" gives a name to the data series.
# # Try adding labels to these plots!
# plt.plot() # Finish this line to plot your hand-picked best-guess parameters from earlier.
# plt.plot() # Finish this line to plot the best fit parameters from curve_fit.
# # This function produces a legend of the plots and their labels. You can leave this line as is.
# plt.legend()
# plt.show()
Explanation: Do you know what popt is? How would you find out?
<details>
<summary>Answer</summary>
If you look back at the `help` output from `curve_fit`, `popt` is a list of the best-fit parameters of our gauss function for this data. The parameters in the list are in the order that the parameters are listed in our `gauss` function (`mean`, `std`, `amp`). Let's try plotting the data, our guess, and the best fit from `curve_fit`!
</details>
End of explanation
perr = np.sqrt(np.diag(pcov)) # This comes straight from the curve_fit help output.
# This is an example of string formatting in python.
# Each set of {} corresponds to a parameter passed to .format().
# .3e means "format this number in scientific notation with 3 digits after the decimal".
# using "*perr" is a nice python trick that expands perr into the three individual values it contains.
# All of this is out of the scope of this lecture, but it's good to get exposure to these things.
# Just look at how nice the output looks!
print('s_mu = {:.3e}, s_sigma = {:.3e}, s_amp = {:.3e}'.format(*perr))
Explanation: How close was your guess to the best fit?
D. Interpreting Fitting Errors
pcov is a little more complicated. pcov is what's called the "covariance matrix" of the best fit parameters. As shown in the help output, the standard deviations of the parameters can be recovered from this matrix in the following way: perr = np.sqrt(np.diag(pcov))
Since we aren't teaching linear algebra here, all of the matrix manipulations will be given to you. All that needs to be taken away from this is how to read this specific matrix.
End of explanation
perr_transpose = np.atleast_2d(perr).T
pcor = pcov / perr / perr_transpose
print(pcor)
Explanation: The covariance matrix looks like this for the parameters in our gauss function
\begin{bmatrix}
s_{\mu}^2 & cov(\mu, \sigma) & cov(\mu, A)\
cov(\sigma, \mu) & s_{\sigma}^2 & cov(\sigma, A)\
cov(A, \mu) & cov(A, \sigma) & s_A^2
\end{bmatrix}
Where $s_x^2$ is the variance of a parameter $x$ and $s_x$ is its estimated standard deviation, and $cov(x, y)$ is the covariance between parameters $x$ and $y$.
Can you guess what np.sqrt(np.diag(pcov)) does now?
Covariance can be difficult to visualize. It's often much easier to look at something called the "correlation coefficient" instead. The correlation coefficients can be easily found from the covariance matrix by using this transformation:
$cor(x, y) = \frac{cov(x, y)}{s_x s_y}$
Performing this transformation on the covariance matrix gives the correlation matrix, which looks like this:
\begin{bmatrix}
1 & cor(\mu, \sigma) & cor(\mu, A)\
cor(\sigma, \mu) & 1 & cor(\sigma, A)\
cor(A, \mu) & cor(A, \sigma) & 1
\end{bmatrix}
The code for this is a bit beyond the scope of this lecture, so it has been done for you below.
End of explanation |